<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Datasets — Alteryx</title>
        <link>https://community.alteryx.com/</link>
        <pubDate>Sun, 19 Apr 2026 12:23:39 +0000</pubDate>
        <language>en</language>
        <description>Datasets — Alteryx</description>
    <atom:link href="https://community.alteryx.com/discussions/tagged/datasets/feed.rss" rel="self" type="application/rss+xml"/>
    <item>
        <title>Any ideas for my macro?</title>
        <link>https://community.alteryx.com/discussion/1436291/any-ideas-for-my-macro</link>
        <pubDate>Thu, 16 Apr 2026 08:22:33 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>Dark_Star9</dc:creator>
        <guid isPermaLink="false">1436291@/discussions</guid>
        <description><![CDATA[<p>Hi there. I'm new to the macro world. My goal is to read each Excel file and sheet and produce an output that reports the empty rows/columns and the address of the real data, like L1C1. A Directory tool picks up my .xlsx files and outputs Full Path directly. I connect that into a macro that lists all the Excel sheets and builds a Full Path column like C:\Users\Downloads\abc.xlsx|||Sheet1$. From there I connect into my second macro, which expects the full path as a control parameter and offers checkboxes to include or exclude empty rows and columns. It works from my Directory tool, but it breaks when I add the kind of Excel file shown in the first screenshot. The second screenshot is my first macro, which lists the Excel sheets and builds the Full Path column. In the third screenshot you can see that my real data starts at column 3, row 6, yet the macro reports L1C1. Why? I'm also losing Full Path between the macro input and the output. Does anyone have ideas about this behavior? Thanks so much for taking the time to answer.</p>
<p><img src="https://us.v-cdn.net/6038679/uploads/UWQ0TTX7ZN5U/image.png" alt="image.png" width="1026" height="584" /></p>
<p><img src="https://us.v-cdn.net/6038679/uploads/ETHTXV01D1Q3/image.png" alt="image.png" width="573" height="390" /></p>
<p><img src="https://us.v-cdn.net/6038679/uploads/STNZ7DXTGVAI/image.png" alt="image.png" width="1137" height="601" /></p>
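For what it's worth, the address-detection step the macro needs can be sketched in plain Python (an illustrative helper, not the macro's actual logic): scan the sheet, read in as a grid, for the first non-empty cell.

```python
# Hedged sketch: find the first non-empty cell of a sheet read in as a grid
# and report it as an "L<row>C<col>" address (illustrative, not the macro).
def first_data_cell(grid):
    for r, row in enumerate(grid, start=1):
        for c, val in enumerate(row, start=1):
            if val not in (None, ""):
                return f"L{r}C{c}"
    return None  # entirely empty sheet

# Example: five empty rows, then data in row 6, column 3.
grid = [["", "", "", ""]] * 5 + [["", "", "42", ""]]
```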
]]>
        </description>
    </item>
    <item>
        <title>How do you create a new field that spells out a numeric value from another field?</title>
        <link>https://community.alteryx.com/discussion/1436159/how-do-you-create-a-new-field-that-spells-out-a-numeric-value-from-another-field</link>
        <pubDate>Thu, 09 Apr 2026 21:02:37 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>Jdelagu11</dc:creator>
        <guid isPermaLink="false">1436159@/discussions</guid>
        <description><![CDATA[<div><p>Good evening everyone. I was hoping I could get some ideas, or be steered in the right direction, on how to build the below (see screenshot).</p><p>I have a numeric field called AMOUNT and I need to create a new field that spells out whatever is in the AMOUNT field.</p><p>Is there a tool that will do that? I asked ChatGPT and it told me I need to build a macro, but I'm not the best at building macros. Let me know if anyone has ideas.</p><p>(I attached an example Alteryx workflow if that helps.)</p><p>Thanks so much.</p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/4f/4f3f84e192000e1d24673df1ee1d1196.png" role="button" title="Example_Amt.png" alt="Example_Amt.png" /></span></p></div>
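As a hedged sketch of what such a macro (or an Alteryx Python tool) would need to do, here is a minimal number-to-words converter. The function names and coverage (whole numbers up to the millions) are illustrative assumptions, not a built-in Alteryx tool:

```python
# Hedged sketch: spell out a whole-number AMOUNT in English words.
ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def three_digits(n):
    # spell out a number in the range 0..999
    words = []
    if n >= 100:
        words.append(ONES[n // 100] + " hundred")
        n %= 100
    if n >= 20:
        words.append(TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else ""))
    elif n:
        words.append(ONES[n])
    return " ".join(words)

def spell_amount(n):
    if n == 0:
        return "zero"
    parts = []
    for scale, name in ((1_000_000, "million"), (1_000, "thousand"), (1, "")):
        if n >= scale:
            parts.append((three_digits(n // scale) + " " + name).strip())
            n %= scale
    return " ".join(parts)
```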
]]>
        </description>
    </item>
    <item>
        <title>Generate a list of workflows in each collection</title>
        <link>https://community.alteryx.com/discussion/1381508/generate-a-list-of-workflow-in-each-collection</link>
        <pubDate>Tue, 01 Apr 2025 21:17:22 +0000</pubDate>
        <category>Alteryx Server</category>
        <dc:creator>bsk_93</dc:creator>
        <guid isPermaLink="false">1381508@/discussions</guid>
        <description><![CDATA[<div><p>Hi All,</p><p>I'm looking to generate a list of the workflows at each collection level.</p><p>How do I generate it, and what is the process to create such a workflow?</p></div>
]]>
        </description>
    </item>
    <item>
        <title>Display custom message on Alteryx Server</title>
        <link>https://community.alteryx.com/discussion/889351/display-custom-message-on-alteryx-server</link>
        <pubDate>Tue, 11 Jan 2022 08:19:49 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>TandelGouravi</dc:creator>
        <guid isPermaLink="false">889351@/discussions</guid>
        <description><![CDATA[<div><p>Hi All.</p><p>I am trying to display a custom message on Alteryx Server after a certain condition is met.</p><p>Using the Message tool is helpful in Designer, but it does not work on the Server.</p><p>I have tried the Output Message option available in the Interface Designer settings, but it is not much help because my message is not a fixed message.</p><p>Thank you for the help in advance.</p><p>Thanks and regards<br />Gouravi Tandel</p></div>
]]>
        </description>
    </item>
    <item>
        <title>Rename of just one column heading from data based in a column</title>
        <link>https://community.alteryx.com/discussion/1434901/rename-of-just-one-column-heading-from-data-based-in-a-column</link>
        <pubDate>Tue, 24 Mar 2026 11:38:54 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>JamesPorter</dc:creator>
        <guid isPermaLink="false">1434901@/discussions</guid>
        <description><![CDATA[<div><p>Is there a way (I thought it would be through Dynamic Rename, but I'm not sure that's the right tool now) to take the data from the first row and add it into the column heading?</p><p>So in the images, I want the first row's Plant value added onto OpenSO Qty Balance, as in the second screenshot.</p><p>For context, I have 3 input feeds, and if something changes I want them to pick up the relevant plant into the output.</p><p>Thank you in advance.</p></div>
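A minimal sketch of the intended rename in plain Python, with illustrative field names and values (Plant, OpenSO Qty Balance, DE01); in Alteryx itself, Dynamic Rename's formula-based rename mode is the likely equivalent, though that is not confirmed here:

```python
# Hedged sketch: append the first data row's Plant value to one column
# heading, the way the post describes; all names here are illustrative.
rows = [
    {"Plant": "DE01", "OpenSO Qty Balance": 120},
    {"Plant": "DE01", "OpenSO Qty Balance": 85},
]
plant = rows[0]["Plant"]                  # value taken from the first data row
new_name = f"OpenSO Qty Balance {plant}"  # new column heading
rows = [{(new_name if k == "OpenSO Qty Balance" else k): v
         for k, v in r.items()} for r in rows]
```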
]]>
        </description>
    </item>
    <item>
        <title>Split DataFile by Account Lead with column headings formatted in Output</title>
        <link>https://community.alteryx.com/discussion/1434367/split-datafile-by-account-lead-with-column-headings-formatted-in-output</link>
        <pubDate>Tue, 17 Mar 2026 15:21:02 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>glemieux</dc:creator>
        <guid isPermaLink="false">1434367@/discussions</guid>
        <description><![CDATA[<div><p>I asked a question yesterday about how to split a data file by AccountLead; the response I received works well, thank you! However, I would like my output files to have formatted column headings with colors. I have tried a few different options, including using a template for my output, but the problem is that the flow creating multiple files then stops working. I tried the Table tool, but I hit the same issue creating multiple output files. Any ideas would be appreciated.</p></div>
]]>
        </description>
    </item>
    <item>
        <title>Finding duplicates within a number range of another line</title>
        <link>https://community.alteryx.com/discussion/1433467/finding-duplicates-within-a-number-range-of-another-line</link>
        <pubDate>Fri, 06 Mar 2026 14:50:54 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>Kearnd967</dc:creator>
        <guid isPermaLink="false">1433467@/discussions</guid>
        <description><![CDATA[<div><p>I have a large data set of ranges over a transport network.</p><p>I would like to find duplicates on the Start Miles/Yards and End Miles/Yards. That is easy enough with a Unique tool. However, I also want to find duplicate entries that fit inside a wider range. For example, given start mile 29, start yard 770, end mile 29, end yard 109, if there were an entry such as start mile 29, start yard 769, end mile 29, end yard 110, I would want to highlight it as a duplicate.</p><p>The key unique fields are "ELR", "TID" and "Direction", which must all match on each occasion to count as a duplicate.</p><p>I am stumped on this one and would really appreciate your help.</p><p>David.</p></div>
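The containment test can be sketched in plain Python by converting miles/yards to total yards (assuming 1 mile = 1760 yards) and comparing ranges within each ELR/TID/Direction group; the records below are illustrative, not the poster's data:

```python
# Hedged sketch: flag entries whose mileage range sits inside another
# entry's range for the same ELR/TID/Direction key.
from collections import defaultdict

def to_yards(mile, yard):
    return mile * 1760 + yard  # assumes 1 mile = 1760 yards

def flag_contained(records):
    groups = defaultdict(list)
    for i, r in enumerate(records):
        groups[(r["ELR"], r["TID"], r["Direction"])].append(i)
    dup = [False] * len(records)
    for idxs in groups.values():
        for i in idxs:
            a = records[i]
            for j in idxs:
                if i == j:
                    continue
                b = records[j]
                # b duplicates a if b's range is contained in a's range
                if (to_yards(a["StartMile"], a["StartYard"]) <= to_yards(b["StartMile"], b["StartYard"])
                        and to_yards(b["EndMile"], b["EndYard"]) <= to_yards(a["EndMile"], a["EndYard"])):
                    dup[j] = True
    return dup

records = [
    {"ELR": "X", "TID": "1", "Direction": "U",
     "StartMile": 29, "StartYard": 769, "EndMile": 29, "EndYard": 1100},
    {"ELR": "X", "TID": "1", "Direction": "U",
     "StartMile": 29, "StartYard": 770, "EndMile": 29, "EndYard": 1090},
]
flags = flag_contained(records)
```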
]]>
        </description>
    </item>
    <item>
        <title>Record Id for the data</title>
        <link>https://community.alteryx.com/discussion/1433805/record-id-for-the-data</link>
        <pubDate>Tue, 10 Mar 2026 15:15:42 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>Alteryxexpert</dc:creator>
        <guid isPermaLink="false">1433805@/discussions</guid>
        <description><![CDATA[<div><p>I have 300 records covering 12 months of data, i.e. 25 rows per month, grouped by month with the value in descending order. I want to assign a Record ID to all the records. For example, for the 25 records in Jan I want the Record ID to run from 1 to 25, and it has to be the same 1 to 25 for every other month across the total 300 records. How do I achieve this?</p></div>
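A sketch of the per-month numbering in plain Python (the Record ID tool itself has no Group By, so in Alteryx a Multi-Row Formula with Group By on Month is the usual route); the rows below are illustrative and assumed already sorted by month then value descending:

```python
# Hedged sketch: restart a row counter at 1 for each month group.
from itertools import groupby

rows = [("Jan", 900), ("Jan", 850), ("Feb", 700), ("Feb", 650), ("Feb", 600)]
numbered = []
for month, grp in groupby(rows, key=lambda r: r[0]):
    for rec_id, (m, value) in enumerate(grp, start=1):
        numbered.append((m, value, rec_id))
```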
]]>
        </description>
    </item>
    <item>
        <title>Calculating Monthly Average for the below Data</title>
        <link>https://community.alteryx.com/discussion/1433320/calculating-monthly-average-for-the-below-data</link>
        <pubDate>Thu, 05 Mar 2026 12:18:56 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>Alteryxexpert</dc:creator>
        <guid isPermaLink="false">1433320@/discussions</guid>
        <description><![CDATA[<div><p>Hi</p><p>I have the attached data covering 5 years. I need to calculate each month's average. For example, for Jan 2025 the average should be (Dec '24 + Jan '25) / 2, and for Feb '25 it should be (Jan '25 + Feb '25) / 2, and it should be calculated this way across all the years' data.<br /><br />How do I achieve this in Alteryx? Can someone help me out with this?</p></div>
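The two-month average can be sketched as follows in plain Python; in Alteryx, a Multi-Row Formula along the lines of ([Row-1:Value] + [Value]) / 2 should express the same idea (the values below are illustrative):

```python
# Hedged sketch: each month's average = (previous month + current month) / 2.
values = [("Dec 2024", 100.0), ("Jan 2025", 120.0), ("Feb 2025", 90.0)]
avgs = []
for i, (month, v) in enumerate(values):
    prev = values[i - 1][1] if i > 0 else None  # no prior month for the first row
    avgs.append((month, (prev + v) / 2 if prev is not None else None))
```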
]]>
        </description>
    </item>
    <item>
        <title>Current date minus one business day</title>
        <link>https://community.alteryx.com/discussion/1179498/current-date-minus-one-business-day</link>
        <pubDate>Thu, 24 Aug 2023 12:28:36 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>Saravanan13</dc:creator>
        <guid isPermaLink="false">1179498@/discussions</guid>
        <description><![CDATA[<div><p>Hello All,</p><p>I need a formula to derive the previous business date from the current date. Can anyone assist?</p><p>Example -</p><p>Input</p><p>Today's date - 08/21/2023</p><p>Output -</p><p>Previous business date - 08/18/2023</p></div>
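A sketch of the weekend-skipping logic in Python (holidays are not handled; the dates are from the example above):

```python
# Hedged sketch: step back one day, then keep stepping back over weekends.
from datetime import date, timedelta

def previous_business_day(d):
    d -= timedelta(days=1)
    while d.weekday() >= 5:   # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d
```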
]]>
        </description>
    </item>
    <item>
        <title>Data Prep/Grouping</title>
        <link>https://community.alteryx.com/discussion/1433087/data-prep-grouping</link>
        <pubDate>Sun, 01 Mar 2026 17:57:31 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>gfisch13</dc:creator>
        <guid isPermaLink="false">1433087@/discussions</guid>
        <description><![CDATA[<div><p>Hi folks, I need some help summarizing some data. In the attached file I'm trying to group all data at the ClaimNum level. I'd like to have one record per claim, which means the LossAddress, Provider and Doctor fields will each need to represent multiple values. The LossAddress field is the simplest since it only ever reflects a single value, but the Provider and Doctor fields should each hold multiple values.</p><p>Any guidance is appreciated!</p><p>Thanks,</p><p>George</p></div>
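A plain-Python sketch of the grouping with illustrative rows; in Alteryx this maps to a Summarize tool with Group By on ClaimNum, First on LossAddress, and Concatenate on Provider and Doctor:

```python
# Hedged sketch: collapse to one record per ClaimNum, concatenating the
# distinct Provider/Doctor values; field names and rows are illustrative.
from collections import defaultdict

rows = [
    {"ClaimNum": "C1", "LossAddress": "12 Main St", "Provider": "Acme", "Doctor": "Lee"},
    {"ClaimNum": "C1", "LossAddress": "12 Main St", "Provider": "Zenith", "Doctor": "Patel"},
]
claims = defaultdict(lambda: {"LossAddress": None, "Provider": [], "Doctor": []})
for r in rows:
    c = claims[r["ClaimNum"]]
    c["LossAddress"] = r["LossAddress"]   # single value per claim
    for f in ("Provider", "Doctor"):
        if r[f] not in c[f]:              # keep distinct values only
            c[f].append(r[f])
summary = {k: {**v, "Provider": ", ".join(v["Provider"]),
               "Doctor": ", ".join(v["Doctor"])} for k, v in claims.items()}
```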
]]>
        </description>
    </item>
    <item>
        <title>Matching Values in a list</title>
        <link>https://community.alteryx.com/discussion/1432896/matching-values-in-a-list</link>
        <pubDate>Thu, 26 Feb 2026 19:30:05 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>gfisch13</dc:creator>
        <guid isPermaLink="false">1432896@/discussions</guid>
        <description><![CDATA[<div><p>Hi Folks,</p><p>I have a list of about 25 values that I want to match against a field in my master data set. I'm creating an indicator field with a value (Y) that indicates a match. I cannot find a function that lets me supply all the match values at once. In other software packages it would be called INLIST or MATCH… does Alteryx have such a tool?</p><p>Thanks,</p><p>George</p><p>Example: MATCH([match_field], value1, value2, value3, …)</p></div>
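A Python sketch of the flag; in Alteryx, a Formula expression of the form IIF([match_field] IN ("value1", "value2", "value3"), "Y", "") should behave like an INLIST, though the IN operator is only available in recent Alteryx versions (verify on yours; the values below are illustrative):

```python
# Hedged sketch: an IN-list membership flag.
match_values = {"value1", "value2", "value3"}  # the ~25 values to match

def match_flag(field):
    return "Y" if field in match_values else ""
```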
]]>
        </description>
    </item>
    <item>
        <title>Split Multiple Fields in the same column into</title>
        <link>https://community.alteryx.com/discussion/1430478/spilt-multiple-fields-in-the-same-column-into</link>
        <pubDate>Wed, 04 Feb 2026 09:30:43 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>Haydn_Cook</dc:creator>
        <guid isPermaLink="false">1430478@/discussions</guid>
        <description><![CDATA[<div><p>I need to split 3 different fields out of the same column; at the moment they are under one general category, but I need them in separate sections. Does anybody know a way I can select only one type of text data? I need to get Global ECM separated from Global M&amp;A.</p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/a5/a58f4fee15307f293b2d77f9b19a80e3.png" role="button" title="Haydn_Cook_0-1770197341680.png" alt="Haydn_Cook_0-1770197341680.png" /></span></p></div>
]]>
        </description>
    </item>
    <item>
        <title>Multi-Row Formula for Multi Columns and Rows</title>
        <link>https://community.alteryx.com/discussion/1431505/multi-row-formula-for-multi-columns-and-rows</link>
        <pubDate>Fri, 13 Feb 2026 05:24:35 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>kristopherlam</dc:creator>
        <guid isPermaLink="false">1431505@/discussions</guid>
        <description><![CDATA[<div><p>Dear all, I have several Excel formulas that I want to translate to Alteryx. I tried the Multi-Row Formula tool to update the rows; however, the dependency between the fields gives a different answer each time I reapply the formula.</p><p>e.g.</p><table><tbody><tr><td>DATE</td><td>UNIT_POSITIVE</td><td>UNIT_NEGATIVE</td><td>MKT PRICE</td><td>Unit c/f</td><td>Avg price</td><td>Amount c/f</td></tr><tr><td>12/15/2025</td><td>-</td><td>-</td><td>22.00</td><td>-</td><td>-</td><td>-</td></tr><tr><td>12/16/2025</td><td>60.00</td><td>-</td><td>23.00</td><td>60.00</td><td>23.0000</td><td>1,380.00</td></tr><tr><td>12/17/2025</td><td>50.00</td><td>(25.00)</td><td>23.00</td><td>85.00</td><td>23.0000</td><td>1,955.00</td></tr><tr><td>12/18/2025</td><td>-</td><td>-</td><td>22.00</td><td>85.00</td><td>23.0000</td><td>1,955.00</td></tr><tr><td>12/19/2025</td><td>-</td><td>-</td><td>22.00</td><td>85.00</td><td>23.0000</td><td>1,955.00</td></tr><tr><td>12/20/2025</td><td>5.00</td><td>(8.00)</td><td>22.00</td><td>82.00</td><td>22.9390</td><td>1,881.00</td></tr></tbody></table><p>Avg price and Amount c/f are newly created from the original data set.</p><p>I'd appreciate it if anyone is willing to lend a helping hand. Thank you so much.</p></div>
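Since the carried average depends on all prior rows, this is really a running (stateful) calculation rather than a single reapplied formula. A plain-Python sketch that reproduces the table above, assuming negative units leave at the carried average price before positive units arrive at market price:

```python
# Hedged sketch: weighted-average-cost carry-forward over the table's rows,
# as (date, unit_positive, unit_negative, mkt_price).
rows = [
    ("12/15/2025", 0.0, 0.0, 22.0),
    ("12/16/2025", 60.0, 0.0, 23.0),
    ("12/17/2025", 50.0, -25.0, 23.0),
    ("12/18/2025", 0.0, 0.0, 22.0),
    ("12/19/2025", 0.0, 0.0, 22.0),
    ("12/20/2025", 5.0, -8.0, 22.0),
]
units = amount = 0.0
out = []
for date, pos, neg, price in rows:
    # negative units come out at the carried average price first
    if units:
        amount += neg * (amount / units)
        units += neg
    # positive units come in at market price
    amount += pos * price
    units += pos
    avg = amount / units if units else 0.0
    out.append((date, round(units, 2), round(avg, 4), round(amount, 2)))
```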
]]>
        </description>
    </item>
    <item>
        <title>Dynamically realign misaligned multi-row headers</title>
        <link>https://community.alteryx.com/discussion/1431133/dynamically-realign-misaligned-multi-row-headers</link>
        <pubDate>Tue, 10 Feb 2026 12:59:28 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>bling_</dc:creator>
        <guid isPermaLink="false">1431133@/discussions</guid>
        <description><![CDATA[<div><p>I am dealing with a misaligned table where the column headers are split across multiple rows and shifted into the wrong positions. I need to dynamically move the header text to the correct columns and align it properly. (Each column needs to start with its correct title-case header.)</p></div>
]]>
        </description>
    </item>
    <item>
        <title>Finding and Replacing an Abbreviation</title>
        <link>https://community.alteryx.com/discussion/1431158/finding-and-replacing-an-abbreviation</link>
        <pubDate>Tue, 10 Feb 2026 16:04:56 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>mfranchino</dc:creator>
        <guid isPermaLink="false">1431158@/discussions</guid>
        <description><![CDATA[<div><p>Hello - thanks for the help in advance.<br /><br />I have cells that follow this format:<br /><br />qual RES1 'Restaurant' = 'Five Guys/Burger {US}'<br />qual GY1 'Gym' = 'Planet Fitness'<br />qual LOC1 'Location' = 'New York/{Bronx, Brooklyn}'<br />concept 'Example 1'</p><p>where 'School' = 'Syracuse University'<br />&nbsp; and RES1</p><p>&nbsp; and 'Meal Type' = 'Dinner'</p><p>concept 'Example 2'<br />where GY1</p><p>&nbsp; and 'Day' = 'Wednesday'</p><p>&nbsp; and LOC1<br /><br /><br />The above is just an example, but all my cells follow a similar format. What I need done:<br /><br />I want all of the qual values at the top to be replaced by their full values throughout the cell. So if the cell contains a reference to GY1, I want it to read 'Gym' = 'Planet Fitness'.<br /><br />My desired output would look like:<br /><br /></p><p>qual RES1 'Restaurant' = 'Five Guys/Burger {US}'<br />qual GY1 'Gym' = 'Planet Fitness'<br />qual LOC1 'Location' = 'New York/{Bronx, Brooklyn}'<br />concept 'Example 1'</p><p>where 'School' = 'Syracuse University'<br />&nbsp; and 'Restaurant' = 'Five Guys/Burger {US}'</p><p>&nbsp; and 'Meal Type' = 'Dinner'</p><p>concept 'Example 2'<br />where 'Gym' = 'Planet Fitness'</p><p>&nbsp; and 'Day' = 'Wednesday'</p><p>&nbsp; and 'Location' = 'New York/{Bronx, Brooklyn}'<br /><br />Please note the three quals above are examples; my quals will be different, but they will follow the same format and sit at the top of the cell. They represent a key that is referenced throughout my cells. Basically, I want a find-and-replace done with the qual values throughout the cell.</p></div>
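A hedged Python sketch of the find-and-replace, assuming every key line starts with `qual CODE …` and codes appear as whole words in the rest of the cell:

```python
# Hedged sketch: parse the `qual CODE 'Field' = 'Value'` lines at the top of
# the cell, then replace later bare references to each code with its full text.
import re

def expand_quals(cell):
    # map each code (e.g. GY1) to its full definition text
    quals = dict(re.findall(r"^qual (\S+) (.+)$", cell, flags=re.MULTILINE))
    def sub(line):
        if line.startswith("qual "):
            return line                  # leave the key definitions alone
        for code, full in quals.items():
            line = re.sub(rf"\b{re.escape(code)}\b", full, line)
        return line
    return "\n".join(sub(l) for l in cell.splitlines())

cell = ("qual GY1 'Gym' = 'Planet Fitness'\n"
        "concept 'Example 2'\n"
        "where GY1\n"
        "  and 'Day' = 'Wednesday'")
```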
]]>
        </description>
    </item>
    <item>
        <title>Sorting a list based on multiple criteria</title>
        <link>https://community.alteryx.com/discussion/1430302/sorting-a-list-based-on-multiple-criteria</link>
        <pubDate>Mon, 02 Feb 2026 16:11:00 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>RCern</dc:creator>
        <guid isPermaLink="false">1430302@/discussions</guid>
        <description><![CDATA[<div><p>I have a data set I am trying to apply logic to, and I don't know the best way to go about it.</p><p>Attached is a screenshot of an example set of the data.</p><p>If a Test Cycle has both "A" and "C" as the Status for the same "Type", I want only the row with the "A" to appear; the workflow can disregard the rows with C. So in the example, since the "Type" Apple in Test Cycle 21 has both an A and a C row, I just want the A row to remain.</p><p>Test Cycle 34 has two types, Apples and Bananas. Since Test Cycle 34 has an A and a C for Apples, just the row with "A" should appear for Apples. For Bananas, since only Status C appears, I want Alteryx to give me one row with the "C" and the max Canc Dt across those three rows for Test Cycle 34.</p><p>Is this easily doable in Alteryx?</p></div>
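A plain-Python sketch of the selection logic (rows and field names are illustrative): within each (Test Cycle, Type) group, keep the A row if one exists, otherwise keep a single C row carrying the max Canc Dt.

```python
# Hedged sketch: prefer the A row per group, else collapse C rows to one
# row with the latest Canc Dt.
from collections import defaultdict

def keep_rows(rows):
    groups = defaultdict(list)
    for r in rows:
        groups[(r["TestCycle"], r["Type"])].append(r)
    kept = []
    for grp in groups.values():
        a_rows = [r for r in grp if r["Status"] == "A"]
        if a_rows:
            kept.append(a_rows[0])
        else:
            kept.append(max(grp, key=lambda r: r["CancDt"]))
    return kept

rows = [
    {"TestCycle": 21, "Type": "Apple", "Status": "A", "CancDt": ""},
    {"TestCycle": 21, "Type": "Apple", "Status": "C", "CancDt": "2026-01-02"},
    {"TestCycle": 34, "Type": "Banana", "Status": "C", "CancDt": "2026-01-05"},
    {"TestCycle": 34, "Type": "Banana", "Status": "C", "CancDt": "2026-01-09"},
]
kept = keep_rows(rows)
```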
]]>
        </description>
    </item>
    <item>
        <title>Error while Loading Data into AWS S3 Athena using Foresight Connection</title>
        <link>https://community.alteryx.com/discussion/1427097/error-while-loading-data-into-aws-s3-athena-using-foresight-connection</link>
        <pubDate>Wed, 24 Dec 2025 13:17:55 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>AshokKumarBobbala</dc:creator>
        <guid isPermaLink="false">1427097@/discussions</guid>
        <description><![CDATA[<div><div><div><p>Hi Team,</p><p>I am trying to <strong>load data into AWS S3 Athena using a Foresight connection in Alteryx</strong> and encountering the following error.</p></div></div><div><div><p><strong>Workflow Tools Used: (Workflow Image attached)</strong></p><ul><li><strong>Input Data</strong> (In/Out)</li><li><strong>Select</strong> (Preparation)</li><li><strong>StreamIn</strong> (In-Database)</li></ul></div></div><div><strong>Error Message:</strong></div><pre><code>StreamIn (52) ERROR [HY000] [AmazonAthena][AthenaClientError]: ExceptionName: InvalidRequestException, ErrorType: 130, ExceptionMessage: Unable to verify/create output bucket aws-athena-query-results-0719165104-us-east-1&para;OdbcException&para;   at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode) &para;   at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader, Object[] methodArguments, SQL_API odbcApiMethod) &para;   at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader) &para;   at System.Data.Odbc.OdbcCommand.ExecuteReader(CommandBehavior behavior) &para;   at org.ForeSight.Tools.Athena.ExecuteScalarQuery(String dsn, String query) &para;   at org.ForeSight.Tools.LakeFormation.GetStagingLocationUri(String odbcDsn, String prefix) &para;   at org.ForeSight.Alteryx.StreamInEngine.WriteData() &para;   at org.ForeSight.Alteryx.StreamInEngine.II_Close() &para;   at SRC.Alteryx.GenericIncomingConnectionHandler.II_Close(GenericIncomingConnectionHandler* )</code></pre><div><p><strong>Questions:</strong></p><ol><li>What does this error mean?</li><li>How can I resolve this issue and successfully load data into AWS S3 Athena using Foresight?</li></ol></div></div>
]]>
        </description>
    </item>
    <item>
        <title>Calgary Tools Being Super Slow</title>
        <link>https://community.alteryx.com/discussion/1429166/calgary-tools-being-super-slow</link>
        <pubDate>Wed, 21 Jan 2026 21:39:33 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>jenny17</dc:creator>
        <guid isPermaLink="false">1429166@/discussions</guid>
        <description><![CDATA[<div><p><span>I have tested every update since version&nbsp;</span><strong>2024.2.1.41</strong><span>, and none of them has resolved the issues with the Calgary tools. Whether using the AMP engine or not, the Calgary tools are ten times slower than in that earlier version. This needs to be investigated seriously, as it represents a significant bug for any users relying on indexing. I have been installing each new version, testing it, and then uninstalling it to go back to&nbsp;</span><strong>2024.2.1.41.</strong></p></div>
]]>
        </description>
    </item>
    <item>
        <title>Building a Modern Data Pipeline with Databricks and Alteryx</title>
        <link>https://community.alteryx.com/discussion/1428339/building-a-modern-data-pipeline-with-databricks-and-alteryx</link>
        <pubDate>Wed, 14 Jan 2026 14:00:00 +0000</pubDate>
        <category>Blogs</category>
        <dc:creator>BenoitC</dc:creator>
        <guid isPermaLink="false">1428339@/discussions</guid>
        <description><![CDATA[<div><h2 id="toc-hId--1328490576"><strong>Problem Statement</strong></h2><p><span style="font-size: large;">Today, many organisations face the same recurring challenge:</span></p><p><span style="font-size: large;"><strong>Data is engineered in one place, analysed in another, and the connection between the two is often manual, fragile, or inefficient.</strong></span></p><ul><li><span style="font-size: large;">Data engineers work in Databricks, designing scalable pipelines and Lakehouse architectures.</span></li><li><span style="font-size: large;">Business analysts work in Alteryx or BI tools, needing clean, trusted, up-to-date data to make decisions.</span></li><li><span style="font-size: large;">IT teams struggle to provide governed, real-time access without duplicating data or adding operational overhead.</span></li><li><span style="font-size: large;">And small teams or newcomers often believe that building a modern data pipeline requires heavy infrastructure or enterprise licences.</span></li></ul><p><span style="font-size: large;">As a result, companies often end up with:</span></p><ul><li><span style="font-size: large;">Inconsistent datasets across teams,</span></li><li><span style="font-size: large;">Delays between engineering and analytics,</span></li><li><span style="font-size: large;">Repeated data extraction or replication,</span></li><li><span style="font-size: large;">Difficulty operationalising insights,</span></li><li><span style="font-size: large;">Frustration on both sides of the &ldquo;tech vs business&rdquo; gap.<br /><br /></span></li></ul><p><span style="font-size: large;"><strong>This article addresses exactly that problem.<br /><br /></strong></span></p><p><span style="font-size: large;">By using only <strong>Databricks Free Edition</strong> and <strong>Alteryx One</strong>, we demonstrate that anyone can:</span></p><ul><li><span style="font-size: large;">Build a structured <strong>Bronze / Silver / Gold 
pipeline</strong> using Delta Lake</span></li><li><span style="font-size: large;">Expose an analytics-ready table through a <strong>Databricks Serverless SQL Warehouse</strong></span></li><li><span style="font-size: large;">Connect Alteryx Cloud via <strong>Live Query</strong> without moving or duplicating data</span></li><li><span style="font-size: large;">Enrich the dataset with business logic in a <strong>no-code Alteryx workflow</strong></span></li><li><span style="font-size: large;">Publish a clean dataset within minutes<br /><br /></span></li></ul><h2 id="toc-hId-1159022257"><strong>The value?</strong></h2><p><span style="font-size: large;">A fully modern, end-to-end, reproducible analytics pipeline, accessible to both data engineers and business users, without needing a full cloud environment or complex infrastructure.</span><br /><br /></p><p><span style="font-size: large;">If your goal is to understand how to connect the Lakehouse world (Databricks) with the no-code analytics world (Alteryx), this article shows the how and the why through a practical example you can reproduce today.<br /><br /></span></p><p><span style="font-size: large;"><strong>1. 
Introduction: Why Databricks and Alteryx?</strong></span></p><p><span style="font-size: large;">In this article, I&rsquo;ll walk through a simple yet powerful end-to-end workflow demonstrating how to combine <strong>Databricks</strong> for scalable data engineering with <strong>Alteryx One</strong>&nbsp;for intuitive, no-code analytics.<br /><br /></span></p><p><span style="font-size: large;">Even with only:</span></p><ul><li><span style="font-size: large;">Databricks Free Edition,</span></li><li><span style="font-size: large;">a single CSV file,</span></li><li><span style="font-size: large;">a small Excel reference table,</span></li></ul><p><span style="font-size: large;">&hellip;it&rsquo;s possible to build a pipeline inspired by modern <strong>Medallion Architecture</strong>, expose clean Delta tables, and make them instantly consumable through <strong>Live Query in Alteryx One</strong>.<br /><br /></span></p><p><span style="font-size: large;">The goal is not to replicate a full enterprise setup, but to show how both platforms complement each other and accelerate analytics for technical and business users alike.<br /><br /></span></p><p><span style="font-size: large;"><strong>2. 
End-to-End Architecture Overview</strong></span></p><p><span style="font-size: large;"><br />Here is the architecture we will build:</span></p><ul><li><span style="font-size: large;"><strong>Ingest</strong> a CSV file into Databricks</span></li><li><span style="font-size: large;">Apply <strong>Bronze &rarr; Silver &rarr; Gold</strong> transformations</span></li><li><span style="font-size: large;">Store the refined table as <strong>Delta Lake table</strong></span></li><li><span style="font-size: large;">Connect <strong>Alteryx Cloud Live Query</strong> to the Gold table</span></li><li><span style="font-size: large;"><strong>Enrich</strong> the dataset with an Excel file (business targets and reference data)</span></li><li><span style="font-size: large;">Perform no-code transformations in Alteryx</span></li><li><span style="font-size: large;">Publish a <strong>Power BI dashboard</strong> for final insights<br /><br /></span></li></ul><p><span style="font-size: large;">The key message is:</span><br /><span style="font-size: large;"><strong>Databricks handles scalable data preparation, Alteryx unlocks business-ready analytics.<br /><br /></strong></span></p><p><span style="font-size: large;"><strong>3. Databricks Pipeline: Simple, Reproducible, and Modern<br /><br /></strong></span></p><p><span style="font-size: large;">Even with the Free Edition, Databricks provides everything needed to structure a clear data engineering workflow using Delta Lake and notebooks.<br /><br /></span></p><p><span style="font-size: large;">Onboarding for the Free Edition is very easy. 
You can sign up in just a few clicks by searching &ldquo;Databricks Free Edition&rdquo; and opening the official link.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/71/716ef9406439ac680f2f7c8c95e52468.png" role="button" title="BenoitC_0-1768236498950.png" alt="BenoitC_0-1768236498950.png" /></span></p><p><span style="font-size: large;">You can sign up for the Free Edition here:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/ab/ab1ec85cbfcae9b83973afe9fe9dab68.png" role="button" title="BenoitC_1-1768236498960.png" alt="BenoitC_1-1768236498960.png" /></span></p><p><span style="font-size: large;">Once you complete the initial steps, you now have access to Databricks, congratulations!<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/a1/a1bff898231cf6de6cdeb308f90392e1.png" role="button" title="BenoitC_2-1768236498968.png" alt="BenoitC_2-1768236498968.png" /></span></p><p><span style="font-size: large;"><strong>3.1 Bronze &ndash; Raw Ingestion</strong></span></p><p><span style="font-size: large;">We start by uploading a CSV file into DBFS (or directly from a cloud bucket if preferred).</span></p><p><span style="font-size: large;">By clicking &ldquo;Upload Data,&rdquo; you can directly add flat files into Databricks. 
For the purpose of this article, we keep things simple by adding raw data directly into DBFS.</span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/2d/2d954994076b3c31bb65fd1723764bc4.png" role="button" title="BenoitC_3-1768236498976.png" alt="BenoitC_3-1768236498976.png" /></span></p><p><span style="font-size: large;">The process is straightforward.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/d8/d8661689d999a79a1aa9732368e6ed63.png" role="button" title="BenoitC_4-1768236498984.png" alt="BenoitC_4-1768236498984.png" /></span></p><p><span style="font-size: large;">Databricks automatically converts the uploaded file into a Delta table, allowing us to preserve raw data in a single, unified environment.</span></p><p><span style="font-size: large;">Opening a notebook, we can now see that our table is available:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/3f/3f2af28fd8ebab5c16ef92052ea18cf2.png" role="button" title="BenoitC_5-1768236498988.png" alt="BenoitC_5-1768236498988.png" /></span></p><p><span style="font-size: large;">Databricks also provides Serverless clusters, meaning you don&rsquo;t need to configure or manage any compute to start working with your data. It just works, Databricks handles all the compute in the background.</span></p><p><span style="font-size: large;">Our files are now fully available in Databricks.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/7e/7e6c78f6999935c0ddf8e7714654f69a.png" role="button" title="BenoitC_6-1768236498995.png" alt="BenoitC_6-1768236498995.png" /></span></p><p><span style="font-size: large;">To simulate a production environment, we now copy our data from Raw to Bronze. 
Databricks structures data into three layers: Bronze (raw), Silver (cleaned and standardized), and Gold (analytics-ready).<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/24/240a1f6904ddb9d040e0cb0fd16d0c35.png" role="button" title="BenoitC_7-1768236499006.png" alt="BenoitC_7-1768236499006.png" /></span></p><p><span style="font-size: large;">Tables are now created and ready for cleansing in the Silver layer.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/5a/5a97346ffabb15e6882f798ded28f460.png" role="button" title="BenoitC_8-1768236499013.png" alt="BenoitC_8-1768236499013.png" /></span></p><p><span style="font-size: large;">We are now ready to move to the next stage.</span></p><p><span style="font-size: large;"><strong>3.2 Silver &ndash; Cleaning &amp; Standardization</strong></span></p><p><span style="font-size: large;">The Silver layer produces a clean, consistent dataset that enables value creation in the downstream Gold layer. Uncleaned data often contains inconsistent types, missing values, duplicate records, and other quality issues.</span></p><p><span style="font-size: large;">To do this, we stay in the same notebook and switch to Python, demonstrating Databricks&rsquo; flexibility by allowing users to choose the language they prefer. 
We start using PySpark SQL so we can easily manipulate the data directly in our notebook.</span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/5b/5b65edad741c6445455548bbb3304c9b.png" role="button" title="BenoitC_9-1768236499020.png" alt="BenoitC_9-1768236499020.png" /></span></p><p><span style="font-size: large;">In a single step, we can now see clean data in our Silver layer after applying correct data types, recalculating amounts, and adding quality filters.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/2e/2ec6d6f335380b58b3379723e5be1b42.png" role="button" title="BenoitC_10-1768236499030.png" alt="BenoitC_10-1768236499030.png" /></span></p><p><span style="font-size: large;">This step can be directly automated from the Notebook interface, allowing us to eliminate manual effort and reduce operational toil.<br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/bf/bf341ffc2886f0731555286df185da26.png" role="button" title="BenoitC_11-1768236499036.png" alt="BenoitC_11-1768236499036.png" /></span></p><p><span style="font-size: large;">We are now ready in Databricks to build our Gold layer.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/e4/e4dc7c01ff26c1795071a27ee75d75f5.png" role="button" title="BenoitC_12-1768236499045.png" alt="BenoitC_12-1768236499045.png" /></span></p><p><span style="font-size: large;"><strong>3.3 Gold &ndash; Analytics-Ready Table<br /><br /></strong></span></p><p><span style="font-size: large;">We simply run a LEFT JOIN between our two Silver tables to produce the Gold table, which is now ready for downstream analytics.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/42/42a3ba5b50d3abcd03d0832507884b93.png" role="button" title="BenoitC_13-1768236499049.png" alt="BenoitC_13-1768236499049.png" /></span></p><p><span style="font-size: large;">We can now run our Alteryx workflow on this 
data.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/22/222141da6c2747146494615a3c45c8b5.png" role="button" title="BenoitC_14-1768236499062.png" alt="BenoitC_14-1768236499062.png" /></span></p><p><span style="font-size: large;">We treat sales_silver as our transactional fact table (each row represents a transaction) and customers_silver as our cleaned customer dimension.<br /><br /></span></p><p><span style="font-size: large;">In the Gold layer, we bring both together into a single fact_sales_gold table, which is the one we expose to Alteryx via Live Query.<br /><br /></span></p><p><strong>4. Connecting Alteryx One to Databricks with Live Query</strong></p><p><span style="font-size: large;"><br />Alteryx One now allows us to use all Alteryx products in a single, seamless experience, whether on the cloud or on a laptop. We first connect to our Databricks data using Alteryx One. To do this, we go to the Alteryx One homepage, click on our profile, and navigate to Workspace Admin.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/3c/3c33988dc6e0c4232216181d348a648b.png" role="button" title="BenoitC_15-1768236499071.png" alt="BenoitC_15-1768236499071.png" /></span></p><p><span style="font-size: large;">Here we can see the Databricks menu, where we can provision our Databricks workspace:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/02/0263ceae235b5ef16fa08232b2d22254.png" role="button" title="BenoitC_16-1768236499075.png" alt="BenoitC_16-1768236499075.png" /></span></p><p><span style="font-size: large;">This information can be found easily in Databricks.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/5d/5d7491e471e5879366c1a8404d0b7d85.png" role="button" title="BenoitC_17-1768236499078.png" alt="BenoitC_17-1768236499078.png" /></span></p><p><span style="font-size: large;">The service URL is the portion of your Databricks 
workspace URL up to cloud.databricks.com.</span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/24/2415cc51dddbc456d36b5fe6086115dd.png" role="button" title="BenoitC_18-1768236499083.png" alt="BenoitC_18-1768236499083.png" /></span></p><p><span style="font-size: large;">We now return to Databricks to generate the Personal Access Token (PAT). Be mindful of the security implications: these keys should never be shared. Go to your profile, open Settings &rarr; Developer, and generate a new token as shown below. Paste this token into Alteryx.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/b7/b7ddcbd347ea0b989bba157e764ec509.png" role="button" title="BenoitC_19-1768236499088.png" alt="BenoitC_19-1768236499088.png" /></span></p><p><span style="font-size: large;">Everything is now set. You just need to fill in the remaining information:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/01/01df946374a8df4773cb8781e17691e3.png" role="button" title="BenoitC_20-1768236499093.png" alt="BenoitC_20-1768236499093.png" /></span></p><p><span style="font-size: large;">As a final step, in the Data tab, we simply need to add the connection &mdash; and we are all set:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/40/4087241d6509c352dac581f424e170c9.png" role="button" title="BenoitC_21-1768236499095.png" alt="BenoitC_21-1768236499095.png" /></span></p><p><span style="font-size: large;">Just add a connection name; all other information has already been filled in:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/4f/4f8701a4ba0c38974ea90e3d5e1d934e.png" role="button" title="BenoitC_22-1768236499100.png" alt="BenoitC_22-1768236499100.png" /></span></p><p><span style="font-size: large;">Once connected, Alteryx queries the Delta table live, without moving or duplicating data, a perfect fit for Lakehouse patterns.<br /><br 
/></span></p><p><span style="font-size: large;">This allows data engineers to refine the pipeline in Databricks while analysts explore the same data instantly in Alteryx.<br /><br /></span></p><p><span style="font-size: large;"><strong>5. No-Code Business Enrichment in Alteryx</strong></span></p><p><span style="font-size: large;"><br />We can now select our Gold table and begin working with our data:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/87/8707632c92bf1a36035ef18f567916a8.png" role="button" title="BenoitC_23-1768236499103.png" alt="BenoitC_23-1768236499103.png" /></span></p><p><span style="font-size: large;">When loading our data in Alteryx, nothing is actually imported into the backend. Everything stays in Databricks, keeping costs low and minimizing data movement:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/22/22653e1b5c7e2e78681cf9972b734707.png" role="button" title="BenoitC_24-1768236499106.png" alt="BenoitC_24-1768236499106.png" /></span></p><p><span style="font-size: large;">This is where Alteryx shines: turning a curated dataset into actionable business insights &mdash; without writing code.<br /><br /></span></p><p><span style="font-size: large;">We now create an Alteryx workflow in Designer Cloud. 
From the homepage, click Create New &rarr; Designer Cloud to begin:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/ce/ceb86c1498a890cc297f97b4bc156fe5.png" role="button" title="BenoitC_25-1768236499110.png" alt="BenoitC_25-1768236499110.png" /></span></p><p><span style="font-size: large;">We can now add an Input tool and start working with our Gold table from Databricks:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/21/216f6e7f1ca129d486cb588c0885d713.png" role="button" title="BenoitC_26-1768236499114.png" alt="BenoitC_26-1768236499114.png" /></span></p><p><span style="font-size: large;">By default, <a href="https://help.alteryx.com/aac/en/designer-experience/workflows/livequery.html" target="_blank" rel="noopener nofollow noreferrer">LiveQuery</a> is enabled, allowing us to use the entire dataset directly in our browser without any replication in the Alteryx infrastructure. This is a major advantage. It enables full pushdown processing and allows users to leverage no-code tools without copying data, relying instead on the powerful scaling capabilities of Databricks.<br /><br /></span></p><p><span style="font-size: large;">You can verify whether Live Query is enabled by clicking your profile icon (top right), navigating to Workspace Admin &rarr; Settings, and checking the Enable Live Query option.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/7c/7c327682ec557f132fd4d71ce061cc47.png" role="button" title="BenoitC_27-1768236499120.png" alt="BenoitC_27-1768236499120.png" /></span></p><p><span style="font-size: large;">We can now import our Excel file into Alteryx One:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/b6/b66dd2f9d251603ab20f06d43a840b76.png" role="button" title="BenoitC_28-1768236499125.png" alt="BenoitC_28-1768236499125.png" /></span></p><p><span style="font-size: large;">We can now add the remaining 
tools and easily prep and blend the data, without writing a single line of code:<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/11/11c01646b3415a41f5572497ac0a8c6c.png" role="button" title="BenoitC_29-1768236499131.png" alt="BenoitC_29-1768236499131.png" /></span></p><p><span style="font-size: large;">And the beauty of this approach is that all processing happens in Databricks.<br /><br /></span></p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/89/89bd9e1a7a0a2229a6480148c2335395.png" role="button" title="BenoitC_30-1768236499136.png" alt="BenoitC_30-1768236499136.png" /></span></p><p><span style="font-size: large;"><strong><br />6. Combined Benefits of Databricks + Alteryx<br /><br /></strong></span></p><p><span style="font-size: large;"><strong>&#128999; What Databricks brings</strong></span></p><ul><li><span style="font-size: large;">scalable Spark compute</span></li><li><span style="font-size: large;">strong data engineering foundations</span></li><li><span style="font-size: large;">Delta Lake performance &amp; reliability</span></li><li><span style="font-size: large;">structured Medallion architecture (Bronze / Silver / Gold)<br /><br /></span></li></ul><p><span style="font-size: large;"><strong>&#128998; What Alteryx brings</strong></span></p><ul><li><span style="font-size: large;">no-code transformation for business users</span></li><li><span style="font-size: large;">governed access to Databricks tables</span></li><li><span style="font-size: large;">fast iteration for analytics and enrichment</span></li><li><span style="font-size: large;">seamless export to BI tools<br /><br /></span></li></ul><p><span style="font-size: large;"><strong>&#129002; Together</strong></span></p><p><span style="font-size: large;">Together, they deliver a modern, efficient workflow that bridges engineering and business teams &mdash; without unnecessary complexity.<br /><br /></span></p><p><span style="font-size: large;"><strong>7. Conclusion<br /><br /></strong></span></p><p><span style="font-size: large;">This project demonstrates that, even with minimal resources (Databricks Free Edition and an Alteryx One environment), it is entirely possible to build a modern Lakehouse-style pipeline and deliver business-ready insights.<br /><br /></span></p><p><span style="font-size: large;">Databricks provides the <strong>engine</strong>, Alteryx provides the <strong>experience</strong>, and together they accelerate analytics from raw data to actionable value.</span></p></div>
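To make the Bronze &rarr; Silver &rarr; Gold flow concrete, here is a minimal plain-Python sketch of the same transformations. The article itself uses PySpark SQL in a Databricks notebook; only the sales_silver / customers_silver / fact_sales_gold naming comes from the article, while the column names and the quality filter below are illustrative assumptions:

```python
from datetime import date

# Hypothetical raw rows as they might land in the Bronze layer (all strings).
bronze_sales = [
    {"sale_id": "1", "customer_id": "C1", "amount": "120.50", "sale_date": "2025-01-05"},
    {"sale_id": "2", "customer_id": "C2", "amount": "", "sale_date": "2025-01-06"},  # fails quality check
    {"sale_id": "3", "customer_id": "C1", "amount": "75.00", "sale_date": "2025-01-07"},
]
bronze_customers = [
    {"customer_id": "C1", "customer_name": "Acme Corp"},
    {"customer_id": "C2", "customer_name": "Globex"},
]

def to_silver(rows):
    """Silver: cast to correct data types and drop rows failing basic quality filters."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue  # quality filter: drop rows with a missing amount
        out.append({
            "sale_id": int(r["sale_id"]),
            "customer_id": r["customer_id"],
            "amount": float(r["amount"]),
            "sale_date": date.fromisoformat(r["sale_date"]),
        })
    return out

def to_gold(sales, customers):
    """Gold: LEFT JOIN the sales fact rows to the customer dimension."""
    dim = {c["customer_id"]: c["customer_name"] for c in customers}
    return [{**s, "customer_name": dim.get(s["customer_id"])} for s in sales]

fact_sales_gold = to_gold(to_silver(bronze_sales), bronze_customers)
```

The shape mirrors the article: sales_silver is the cleaned fact, customers_silver the dimension, and fact_sales_gold is the single table exposed to Alteryx via Live Query.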
]]>
        </description>
    </item>
    <item>
        <title>How can I use the Output Data in part 1 of my workflow as the Input Data in part 2....</title>
        <link>https://community.alteryx.com/discussion/1428342/how-can-i-use-the-output-data-in-part-1-of-my-workflow-as-the-input-data-in-part-2</link>
        <pubDate>Mon, 12 Jan 2026 17:13:12 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>IMERNDO</dc:creator>
        <guid isPermaLink="false">1428342@/discussions</guid>
        <description><![CDATA[<div><p>...without tool containers (which I currently use).</p><p>The workflow is set up like this.</p><p><span><img src="https://us.v-cdn.net/6038679/uploads/images/0a/0a790f29c1ad9ad884907fd2f93f7204.png" role="button" title="Screenshot.png" alt="Screenshot.png" /></span></p><p>The Database (Output Tool) in container #1 is the same Database (Input Tool) in container #2. I always need to update it before running the Macro (container #2), which means Run #1, Disable #1, Enable #2, Run #2. I know it's just a few clicks but I do this a lot and would love a solution that means I just run the whole workflow once.</p></div>
]]>
        </description>
    </item>
    <item>
        <title>Fuzzy Matching</title>
        <link>https://community.alteryx.com/discussion/1427938/fuzzy-matching</link>
        <pubDate>Wed, 07 Jan 2026 16:15:41 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>mfranchino</dc:creator>
        <guid isPermaLink="false">1427938@/discussions</guid>
        <description><![CDATA[<div><p>Hello - I am having an issue with Fuzzy Match.&nbsp;<br /><br />I have a data source that looks like:<br /><br /></p><table border="1"><tbody><tr><td>NAME</td><td>ADDRESS</td><td>NUMBER</td></tr><tr><td>AMBERFLORENCE</td><td>123 EXAMPLE</td><td>1234</td></tr><tr><td>TOM BRADY</td><td>456 EXAMPLE</td><td>4567</td></tr><tr><td>MIKE TROUT</td><td>789 EXAMPLE</td><td>7890</td></tr></tbody></table><p>And I have a list of names that I want to fuzzy match against NAME. For example, my list looks like:<br /><br /></p><table border="1"><tbody><tr><td>Name</td></tr><tr><td>T Brady</td></tr><tr><td>Florence</td></tr></tbody></table><p>I have the second table as a Text Input in my Alteryx flow. I want to fuzzy match for the second table's values in the first table. Please note that my actual data set is much larger; I am just using this as an example.&nbsp;<br /><br />Thanks for the help!</p></div>
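For readers who want to see the matching logic outside the Fuzzy Match tool, here is a rough sketch using Python's standard-library difflib. The 0.7 threshold and the containment rule are illustrative choices for this sample data, not what the Alteryx tool does internally:

```python
from difflib import SequenceMatcher

# Sample data from the post.
master = [
    {"NAME": "AMBERFLORENCE", "ADDRESS": "123 EXAMPLE", "NUMBER": "1234"},
    {"NAME": "TOM BRADY", "ADDRESS": "456 EXAMPLE", "NUMBER": "4567"},
    {"NAME": "MIKE TROUT", "ADDRESS": "789 EXAMPLE", "NUMBER": "7890"},
]
targets = ["T Brady", "Florence"]

def similarity(a, b):
    """Case-insensitive score: 1.0 for containment (e.g. 'florence' inside
    'amberflorence'), otherwise a difflib sequence ratio."""
    a, b = a.lower(), b.lower()
    if a in b or b in a:
        return 1.0
    return SequenceMatcher(None, a, b).ratio()

matches = []
for t in targets:
    for row in master:
        if similarity(t, row["NAME"]) >= 0.7:  # illustrative threshold
            matches.append((t, row["NAME"]))
```

With this sample, "T Brady" pairs with TOM BRADY and "Florence" with AMBERFLORENCE, while the cross-pairs score well below the threshold.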
]]>
        </description>
    </item>
    <item>
        <title>Wide SPSS-generated Excel file ETL in Alteryx - tips needed!</title>
        <link>https://community.alteryx.com/discussion/1407558/wide-spss-generated-excel-file-etl-in-alteryx-tips-needed</link>
        <pubDate>Tue, 12 Aug 2025 18:41:22 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>TeePee</dc:creator>
        <guid isPermaLink="false">1407558@/discussions</guid>
        <description><![CDATA[<div><p>Niche request here: does anyone have experience with the ETL of <em>very</em> wide SPSS-derived Excel data sets in Alteryx please?</p><p>We have questionnaire results in Excel files (derived from SPSS) which are over 16k columns wide, so in Alteryx just <em>looking</em> at a Union tool, for example, can take minutes, and then there's the added complication of mapping the column header codes to their correct aliases...&nbsp;</p><p>We have a very simple working solution, which simply unions the Excel files and then deselects all the unused columns via a Select tool, but I feel we could do better, maybe through transposing?&nbsp; We have left the mapping to Tableau, where we've just assigned aliases.</p><p>Any advice would be much appreciated.&nbsp; Thanks so much in advance.</p></div>
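One low-tech way to frame the problem: define the code-to-alias mapping once, then drop the unused columns as early as possible so the 16k-wide schema never reaches the downstream tools (in Designer this is roughly what a Select or Dynamic Rename driven by a lookup table does). A plain-Python sketch with hypothetical question codes:

```python
# Hypothetical SPSS-style coded headers mapped to readable aliases;
# in practice this mapping would come from the questionnaire's data dictionary.
alias_map = {"Q01_A": "age_band", "Q02_B": "region", "Q17_C": "satisfaction"}

def trim_and_rename(rows, alias_map):
    """Keep only the columns we actually use and rename them in one pass,
    instead of carrying all 16k columns through every downstream step."""
    return [{alias_map[k]: r.get(k) for k in alias_map} for r in rows]

# Two hypothetical survey rows; Q99_Z stands in for the thousands of unused columns.
wide_rows = [
    {"Q01_A": "25-34", "Q02_B": "North", "Q17_C": "4", "Q99_Z": "unused"},
    {"Q01_A": "35-44", "Q02_B": "South", "Q17_C": "5", "Q99_Z": "unused"},
]
narrow = trim_and_rename(wide_rows, alias_map)
```

The design point is that the trim happens before the union, not after it, so every later tool only ever sees the narrow schema.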
]]>
        </description>
    </item>
    <item>
        <title>Handling different datatypes (Text, Numbers, plus CLOB, BLOBs) in a single flow</title>
        <link>https://community.alteryx.com/discussion/1427271/handling-different-datatypes-text-numbers-plus-clob-blobs-in-a-single-flow</link>
        <pubDate>Mon, 29 Dec 2025 18:47:55 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>SSaib</dc:creator>
        <guid isPermaLink="false">1427271@/discussions</guid>
        <description><![CDATA[<div><p>Hi everyone,</p><p>I&rsquo;m creating a workflow in Alteryx that will call a Macro on a scheduled basis. The Macro receives the following inputs via a Text Input tool:</p><ul><li><p>Source Oracle table name</p></li><li><p>Filter / WHERE clause (if applicable)</p></li><li><p>Destination database/schema name in Snowflake</p></li></ul><p>Normally, the Macro processing is straightforward: I use Input Data (Oracle) and Output Data (Snowflake) to establish connections, and a few Control Parameters allow the Macro to sequentially move a set of tables from Oracle to Snowflake.</p><p>However, this time I&rsquo;m encountering some CLOB and BLOB fields that require additional processing, and I need some guidance.</p><p>I have two main questions:</p><ol><li><p>Dynamic routing based on data type<br />a) How can I analyze or parse an incoming table dynamically to identify the data types of each column?<br />b) Once the data types are identified, how can I route them to different pipelines for processing? For example:</p><ul><li><p>Text, Date, Number &rarr; Default Snowflake datatypes</p></li><li><p>BLOB &rarr; Use the BLOB Convert tool</p></li><li>CLOB&nbsp;</li></ul></li><li><p>Handling CLOBs<br />The sample table I tested on doesn&rsquo;t have large CLOB values, but the workflow still times out at this step. I assume some additional processing is needed to move the CLOB. How should I handle this efficiently in Alteryx?</p></li></ol><p>I&rsquo;m relatively new to Alteryx and have only used a limited set of tools, so any tips, best practices, or example workflows for handling CLOBs and BLOBs in a scheduled Macro would be greatly appreciated.</p><p>Thank you in advance for your guidance!</p></div>
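For question 1a, one approach is to read the column metadata from Oracle's ALL_TAB_COLUMNS dictionary view (e.g. SELECT column_name, data_type FROM all_tab_columns WHERE table_name = :t) and branch on DATA_TYPE. Here is a sketch of the routing step, with the metadata query stubbed out as a plain list:

```python
# Stub for the result of querying ALL_TAB_COLUMNS for one table;
# in the real workflow this would come from the Oracle connection.
columns = [
    {"column_name": "ID", "data_type": "NUMBER"},
    {"column_name": "CREATED", "data_type": "DATE"},
    {"column_name": "NOTES", "data_type": "CLOB"},
    {"column_name": "ATTACHMENT", "data_type": "BLOB"},
]

def route(columns):
    """Split columns into pipelines by data type, mirroring the three routes
    described above (default Snowflake types / BLOB Convert / CLOB handling)."""
    pipelines = {"default": [], "blob": [], "clob": []}
    for c in columns:
        if c["data_type"] == "BLOB":
            pipelines["blob"].append(c["column_name"])
        elif c["data_type"] == "CLOB":
            pipelines["clob"].append(c["column_name"])
        else:
            pipelines["default"].append(c["column_name"])
    return pipelines

pipelines = route(columns)
```

Once the three column lists exist, each list can drive its own branch of the macro (for example, building a SELECT that casts or excludes the LOB columns).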
]]>
        </description>
    </item>
    <item>
        <title>Row creation</title>
        <link>https://community.alteryx.com/discussion/1427059/row-creation</link>
        <pubDate>Tue, 23 Dec 2025 19:31:57 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>cvinju</dc:creator>
        <guid isPermaLink="false">1427059@/discussions</guid>
        <description><![CDATA[<div><p>I have the below data set. What I want is to create rows anywhere NBD is NOT 1. For example, for 12/12, NBD = 3, so I want to "fill the gap" by creating 2 rows: one for 12/13 and one for 12/14, with all the data from 12/12. The number of rows I need created would be determined by the number in the NBD column minus 1.</p><p>start:</p><table><tbody><tr><td>Begin Date</td><td>NBD</td><td>INC RATIO</td><td>&nbsp;INT INC&nbsp;</td><td>&nbsp;AMORT&nbsp;</td><td>&nbsp;STGL&nbsp;</td></tr><tr><td>12/11/2025</td><td>1</td><td>4.52</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,474,492.49</td><td>&nbsp;&nbsp;&nbsp; 154,906.01</td><td>&nbsp;&nbsp; 12,987.71</td></tr><tr><td>12/12/2025</td><td>3</td><td>4.35</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,392,897.43</td><td>&nbsp;&nbsp;&nbsp; 189,854.47</td><td>&nbsp;&nbsp;&nbsp; (1,507.33)</td></tr><tr><td>12/15/2025</td><td>1</td><td>4.77</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,565,859.49</td><td>&nbsp;&nbsp;&nbsp; 172,362.66</td><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; -&nbsp;&nbsp;</td></tr><tr><td>12/16/2025</td><td>1</td><td>4.33</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,402,171.60</td><td>&nbsp;&nbsp;&nbsp; 177,340.59</td><td>&nbsp;&nbsp;&nbsp; (4,281.20)</td></tr><tr><td>12/17/2025</td><td>1</td><td>5.34</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,705,424.20</td><td>&nbsp;&nbsp;&nbsp; 218,535.97</td><td>&nbsp;&nbsp; 18,488.18</td></tr><tr><td>12/18/2025</td><td>1</td><td>4.38</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,398,298.55</td><td>&nbsp;&nbsp;&nbsp; 195,066.07</td><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; -&nbsp;&nbsp;</td></tr><tr><td>12/19/2025</td><td>3</td><td>4.71</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,533,445.90</td><td>&nbsp;&nbsp;&nbsp; 
184,799.98</td><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0.78</td></tr><tr><td>12/22/2025</td><td>1</td><td>4.89</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,588,527.00</td><td>&nbsp;&nbsp;&nbsp; 200,739.36</td><td>&nbsp;&nbsp;&nbsp; (4,469.60)</td></tr></tbody></table><p>end state:</p><table><tbody><tr><td>Begin Date</td><td>NBD</td><td>INC RATIO</td><td>&nbsp;INT INC&nbsp;</td><td>&nbsp;AMORT&nbsp;</td><td>&nbsp;STGL&nbsp;</td></tr><tr><td>12/11/2025</td><td>1</td><td>4.52</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,474,492.49</td><td>&nbsp;&nbsp;&nbsp; 154,906.01</td><td>&nbsp;&nbsp; 12,987.71</td></tr><tr><td>12/12/2025</td><td>3</td><td>4.35</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,392,897.43</td><td>&nbsp;&nbsp;&nbsp; 189,854.47</td><td>&nbsp;&nbsp;&nbsp; (1,507.33)</td></tr><tr><td>12/13/2025</td><td>3</td><td>4.35</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,392,897.43</td><td>&nbsp;&nbsp;&nbsp; 189,854.47</td><td>&nbsp;&nbsp;&nbsp; (1,507.33)</td></tr><tr><td>12/14/2025</td><td>3</td><td>4.35</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,392,897.43</td><td>&nbsp;&nbsp;&nbsp; 189,854.47</td><td>&nbsp;&nbsp;&nbsp; (1,507.33)</td></tr><tr><td>12/15/2025</td><td>1</td><td>4.77</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,565,859.49</td><td>&nbsp;&nbsp;&nbsp; 172,362.66</td><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; -&nbsp;&nbsp;</td></tr><tr><td>12/16/2025</td><td>1</td><td>4.33</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,402,171.60</td><td>&nbsp;&nbsp;&nbsp; 177,340.59</td><td>&nbsp;&nbsp;&nbsp; (4,281.20)</td></tr><tr><td>12/17/2025</td><td>1</td><td>5.34</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,705,424.20</td><td>&nbsp;&nbsp;&nbsp; 218,535.97</td><td>&nbsp;&nbsp; 18,488.18</td></tr><tr><td>12/18/2025</td><td>1</td><td>4.38</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,398,298.55</td><td>&nbsp;&nbsp;&nbsp; 
195,066.07</td><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; -&nbsp;&nbsp;</td></tr><tr><td>12/19/2025</td><td>3</td><td>4.71</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,533,445.90</td><td>&nbsp;&nbsp;&nbsp; 184,799.98</td><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0.78</td></tr><tr><td>12/20/2025</td><td>3</td><td>4.71</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,533,445.90</td><td>&nbsp;&nbsp;&nbsp; 184,799.98</td><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0.78</td></tr><tr><td>12/21/2025</td><td>3</td><td>4.71</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,533,445.90</td><td>&nbsp;&nbsp;&nbsp; 184,799.98</td><td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0.78</td></tr><tr><td>12/22/2025</td><td>1</td><td>4.89</td><td>&nbsp;&nbsp;&nbsp;&nbsp; 1,588,527.00</td><td>&nbsp;&nbsp;&nbsp; 200,739.36</td><td>&nbsp;&nbsp;&nbsp; (4,469.60)</td></tr></tbody></table></div>
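A minimal sketch of the gap-fill described above, assuming pandas rather than an Alteryx workflow (inside Designer, a Generate Rows tool is one common route). The DataFrame below uses only a subset of the posted columns for brevity; the same repeat works for all of them:

```python
import pandas as pd

# Subset of the posted table: one value column stands in for the rest.
df = pd.DataFrame({
    "Begin Date": ["12/11/2025", "12/12/2025", "12/15/2025"],
    "NBD": [1, 3, 1],
    "INC RATIO": [4.52, 4.35, 4.77],
})
df["Begin Date"] = pd.to_datetime(df["Begin Date"], format="%m/%d/%Y")

# Repeat each row NBD times, then shift each copy's date by 0..NBD-1 days,
# so an NBD of 3 yields the original row plus two gap-filling copies.
out = df.loc[df.index.repeat(df["NBD"])].copy()
out["Begin Date"] += pd.to_timedelta(out.groupby(level=0).cumcount(), unit="D")
out = out.reset_index(drop=True)
```

The `groupby(level=0).cumcount()` call numbers the repeated copies of each original row 0, 1, 2, …, which becomes the day offset added to the copied date.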
]]>
        </description>
    </item>
    <item>
        <title>Batch macro to read 1000+ .xlsx files with varying schemas</title>
        <link>https://community.alteryx.com/discussion/1426520/batch-macro-to-read-1000-xlsx-files-with-varying-schemas</link>
        <pubDate>Wed, 17 Dec 2025 11:30:25 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>LizPerry</dc:creator>
        <guid isPermaLink="false">1426520@/discussions</guid>
        <description><![CDATA[<div><p>I have a flow with a Directory input that reads all files from a network folder. I use one batch macro to find all sheet names across the files, and a second batch macro to bring the data from each of those files and sheets into the flow.</p><p>Ideally I'd like to create a unioned view of the data, but I'm struggling with variances in format. All submissions should follow the same format, but various customers don't or won't comply. I'm happy to reject a number of submissions, but some have simple fixes: for example, the data doesn't start on row 1, so I need to look for the row of headings, or the headings are worded slightly differently.</p><p>These are examples of some of the variances I'd like to fix. The batch macro that picks up all data from all sheets fails when I bring it into my flow. The actual headers should be Distributor Account Number, Distributor Name, Date of Purchase, Customer Name, Material Reference Number &amp; Quantity Sold, but whether they appear on row 1 or on row 6, 7, etc., I want to start my data collation from there, and if a header reads something like Dist Acc No, I want to accept it.</p></div>
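The header-hunting step described above can be sketched in plain Python. The alias map and the 4-match threshold are illustrative assumptions, not part of the original post:

```python
# Scan raw rows for the header row, tolerating shorthand like "Dist Acc No".
EXPECTED = [
    "distributor account number", "distributor name", "date of purchase",
    "customer name", "material reference number", "quantity sold",
]
ALIASES = {"dist acc no": "distributor account number"}  # illustrative

def normalize(cell):
    """Lowercase, trim, and map known shorthand to the canonical header."""
    key = str(cell).strip().lower()
    return ALIASES.get(key, key)

def find_header_row(rows):
    """Return the index of the first row where enough cells match the
    expected headers, or None if no plausible header row exists."""
    for i, row in enumerate(rows):
        hits = sum(1 for cell in row if normalize(cell) in EXPECTED)
        if hits >= 4:  # threshold tolerates a couple of odd headings
            return i
    return None

# Example: headers start on row 2 of the sheet, with one shorthand heading.
rows = [
    ["Submission for Q4", "", "", "", "", ""],
    ["Dist Acc No", "Distributor Name", "Date of Purchase",
     "Customer Name", "Material Reference Number", "Quantity Sold"],
    ["A123", "Acme", "2025-12-01", "Smith", "MRN-1", "10"],
]
```

Everything above the returned index is skipped, and everything below it is collated under the canonical header names; submissions where no row clears the threshold can be routed to the reject pile.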
]]>
        </description>
    </item>
    <item>
        <title>TomTom Dataset Unlicensed Error</title>
        <link>https://community.alteryx.com/discussion/558914/tomtom-dataset-unlicensed-error</link>
        <pubDate>Tue, 21 Apr 2020 12:56:09 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>ERT</dc:creator>
        <guid isPermaLink="false">558914@/discussions</guid>
        <description><![CDATA[<div><p>Recently, I installed the following datasets according to my organization's internal protocols:</p><ul><li><span>TomTom Geocoder and Reverse Geocoder Macro, Address Points</span></li><li><span>Zip+4 Coder</span></li><li><span>TomTom Drivetime, Alteryx Maps</span></li><li><span>US Census 2010 SF1 Demographic Data</span></li><li><span>DNB Analytical File</span></li><li><span>Experian CAPE, ACS Demos (incl. PR) and Simmons Syndicated Survey</span></li><li><span>Experian Household and Individual Analytica File</span></li><li><span>Kalibrate Technologies Traffic Counts Sample</span></li></ul><p><span>Now, when I attempt to use the dataset in the 'Distance' tool to output drive times between multiple locations, I get the following error: 'Error: Distance (6): The data set "TomTom_US_2019_Q4" is unlicensed.'<br /></span></p><p><span>I was redirected to this forum by my organization to see how this issue may be resolved. Thank you very much for your assistance!</span></p></div>
]]>
        </description>
    </item>
    <item>
        <title>How to select columns dynamically using number of count</title>
        <link>https://community.alteryx.com/discussion/1426805/how-to-select-columns-dynamically-using-number-of-count</link>
        <pubDate>Fri, 19 Dec 2025 07:25:15 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>alt_tush</dc:creator>
        <guid isPermaLink="false">1426805@/discussions</guid>
        <description><![CDATA[<div><p>Hello,</p><p>I have the data set below in Excel. The data is imported from line 1, so Alteryx created default field names like Field1, Field2, Field3, etc.</p><table border="1"><tbody><tr><td>F1</td><td>F2</td><td>F3</td><td>F4</td><td>F5</td></tr><tr><td>ABC</td><td>EFG</td><td>PQR</td><td>XYZ</td><td>MNO</td></tr><tr><td>ABC</td><td>EFG</td><td>PQR</td><td>XYZ</td><td>MNO</td></tr></tbody></table><p>My requirement is that the selection of fields in the output file should be dynamic. If I pass a count of 2 from a text input, it should select only the first two columns, i.e. F1 and F2, and ignore the rest. If I pass a count of 4, it should select the first four columns, i.e. F1, F2, F3 and F4, from the table above.</p><p>Once the columns are selected, I need to rename the headers from F1, F2, F3 to Column1, Column2, Column3, etc.</p><p>How can I achieve this? Could you please help?</p><p>Your help is really appreciated.</p><p>Thank you in advance.</p></div>
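A minimal sketch of the dynamic selection and renaming described above, assuming pandas rather than an Alteryx workflow; `count` stands in for the value passed from the text input:

```python
import pandas as pd

# The sample table from the post.
df = pd.DataFrame(
    [["ABC", "EFG", "PQR", "XYZ", "MNO"]] * 2,
    columns=["F1", "F2", "F3", "F4", "F5"],
)

count = 2  # number of leading columns to keep, supplied at run time

# Keep the first `count` columns, then rename them Column1..ColumnN.
out = df.iloc[:, :count].copy()
out.columns = [f"Column{i + 1}" for i in range(count)]
```

In Designer itself, the equivalent is usually a batch macro whose control parameter drives a Select or Dynamic Rename tool; the pandas version just shows the positional-slice-then-rename logic.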
]]>
        </description>
    </item>
    <item>
        <title>Support on Zoom?</title>
        <link>https://community.alteryx.com/discussion/1425607/support-on-zoom</link>
        <pubDate>Tue, 09 Dec 2025 21:24:31 +0000</pubDate>
        <category>Alteryx One</category>
        <dc:creator>egonzales</dc:creator>
        <guid isPermaLink="false">1425607@/discussions</guid>
        <description><![CDATA[<div><p>Hello,</p><p>I'm building a GIS map with FELT using census data. I'm interested in using Alteryx to clean and reorganize the data set, but I don't have time to figure out how to use the platform. Is there someone who can get on Zoom with me and teach/show me how to do what I need to do? I could potentially pay for your time.</p></div>
]]>
        </description>
    </item>
   </channel>
</rss>
