Alter Everything Podcast

A podcast about data science and analytics culture.
AlteryxMatt
Moderator

In this episode of Alter Everything, we sit down with Andrew Merrill, Alteryx product specialist and advocate, to explore best practices for integrating AI and LLMs into data analytics processes. Topics include proven design patterns for generative AI, such as feedback loops, routing, and RAG architectures, plus how to avoid common pitfalls like token overuse and data governance challenges. Andrew shares real-world use cases, tips for leveraging Alteryx Co-pilot, and strategies for prompt engineering to maximize workflow efficiency.

Episode Transcription

Ep 198 Best Practices for Integrating AI Into Your Alteryx Workflows
===


[00:00:00] Introduction to the Podcast
---

[00:00:00] Megan Bowers: Welcome to Alter Everything, a podcast about data science and analytics culture. I'm Megan Bowers, and today I am talking with Andrew Merrill, product specialist and advocate at Alteryx. In this episode, we chat about using generative AI within Alteryx Designer, including best practices, common challenges, and use cases for new features like the Gen AI tools and Alteryx Co-pilot. Let's get started.


[00:00:35] Meet Andrew Merrill
---

[00:00:35] Megan Bowers: Hey, Andrew, it's great to have you here today. Could you give a quick introduction to yourself for our listeners?

[00:00:41] Andrew Merrill: Sure. My name's Andrew Merrill. I am an Alteryx product specialist and advocate here at Alteryx. Actually just started this year, so still fairly fresh to the role, but I've done a lot of things over the course of my life, including studying math and physics in undergrad. I went to medical school. I've worked for a construction company as a business analyst, and now, obviously, here at Alteryx, working as part of the data science team. So very passionate about data in all its forms, and I've seen it applied in a whole bunch of different fields. And then obviously with this AI boom, that's been a particular focus of mine, an area that I am also very passionate about.

[00:01:20] Megan Bowers: Yeah, definitely. I'm super interested to get your takes on AI and especially incorporating it into Alteryx workflows in this episode.


[00:01:28] Generative AI in Alteryx Workflows
---

[00:01:28] Megan Bowers: So I think a good place to start would be just, when we talk about Alteryx workflow development, there are these common design patterns, and I know we have content on Community about them, like maybe you're commonly taking the top five results and sorting, or something like that. There are all sorts of design patterns. But when it comes to incorporating generative AI in Alteryx workflows, what are some good design patterns for that from your perspective?

[00:01:57] Design Patterns for Generative AI
---

[00:01:57] Andrew Merrill: So there's a few, and you can certainly break this down into more granularity, but really just putting the tool on the canvas, starting that engagement, and seeing what output you get is the first step. And so the first design pattern would really just be an input into, for example, the Gen AI tools with a simple prompt, and then whatever you get out is useful information. Obviously, that's the simplest, and that's basically the bedrock of every other design pattern you're ever going to build in an Alteryx workflow.

Beyond that, of course, one of the big things now is building out evals, building out structure, making sure that the data is of high quality. You can now, for example, take your input and pass it into a second Gen AI tool acting as a judge. "LLM as a judge" is the language that's typically used. Or even within the Alteryx framework, you could build, for example, an iterative macro where you're passing the data back in because it doesn't meet some process requirement, some metric that you can set up ahead of time. Then it will kick it back through, run through the LLM again, and now you have enhanced output, because every time it outputs, you're capturing that new data and feeding it back in.
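A minimal sketch of this feedback-loop pattern, in Python rather than in Designer. Both `generate` and `judge` are hypothetical stand-ins for Gen AI tool calls (a real judge would be a second LLM prompt), but the loop structure mirrors the iterative macro Andrew describes: failing output is sent back through with the critique appended.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for the first Gen AI tool call."""
    # Pretend the model only answers concisely once the prompt asks for it.
    if "be brief" in prompt:
        return "short summary"
    return "a very long rambling answer " * 5

def judge(output: str) -> tuple[bool, str]:
    """Hypothetical stand-in for a second Gen AI tool acting as the judge."""
    if len(output) <= 40:
        return True, ""
    return False, "Too long; be brief."

def feedback_loop(prompt: str, max_iters: int = 3) -> str:
    """Regenerate until the judge approves, feeding the critique back in."""
    output = ""
    for _ in range(max_iters):
        output = generate(prompt)
        approved, critique = judge(output)
        if approved:
            break
        # Like an iterative macro: kick the record back through with feedback.
        prompt = f"{prompt}\nFeedback on last attempt: {critique}"
    return output
```

The iteration cap matters in practice: without it, a judge that never approves would burn tokens indefinitely.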

So the second design pattern is using this iterative cycle as a feedback loop to judge your output and recursively improve. Beyond that, routing is a big one as well. So for example, take input data. We have some block of text that we need to evaluate, and depending on what the text is, we're going to send it down one of two or three different pathways. We could build complex logic to try and handle that, but nowadays, with the amount of context and the variability in terms of what you can get as an input, we pass it into an LLM, and the output of the LLM is going to be just a simple category: path one, path two, path three. Or, "This is an IT request versus this is an IT incident" that needs to be directed along certain channels. So we can have the LLM output just that one category, and now a Filter tool can branch and send the data either to the request section of the workflow or to the incident section of the workflow. That gives us the ability to control how we're processing, and that really begins to open up the kind of agentic process. Another really big thing nowadays.
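The routing pattern just described can be sketched the same way. `classify_ticket` is a hypothetical stand-in for a Gen AI tool prompted to return exactly one category label; the `route` function plays the role of the Filter tool branching records down separate paths.

```python
def classify_ticket(text: str) -> str:
    """Stand-in for an LLM prompted: 'Reply with only "request" or "incident".'"""
    if "broken" in text.lower() or "down" in text.lower():
        return "incident"
    return "request"

def route(records: list[str]) -> dict[str, list[str]]:
    """Branch each record down the path named by the LLM's category output."""
    branches: dict[str, list[str]] = {"request": [], "incident": []}
    for text in records:
        branches[classify_ticket(text)].append(text)
    return branches

tickets = [
    "Please grant me access to the sales dashboard",
    "The reporting server is down again",
]
routed = route(tickets)
```

The key design point is that the LLM emits only the single category word, so everything downstream stays deterministic.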

Then lastly, the other thing that's really big is this kind of RAG architecture, building in memory. So you have some external dataset that will add context, but you don't want to add the entire block of context to everything you send, because that can confuse your AI. You may get muddied results, especially with things like hallucinations. Instead, what you can do, for example, is take a labeled dataset where, let's just say as a simple example, you have five different prompts that could apply to your data, each with some category assigned to it. You can do a first pass through your LLM or through your Gen AI tool, where the only thing we want it to do is say, "Which of these five categories does our input fit into?" Based on that, we'll take only the relevant information that we have in memory, append it onto our request, and now we can do a second Gen AI tool pass where we have that added context to give more insight into how to appropriately handle that input before we action on it.
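The two-pass memory pattern can be sketched as follows. `pick_category` is a hypothetical stand-in for the first Gen AI tool pass, and the context library is an invented example dataset; the point is that only the matching slice of stored context is appended before the second pass.

```python
# Invented example "memory": stored context keyed by category.
CONTEXT_LIBRARY = {
    "billing": "Refunds over $100 need manager approval.",
    "shipping": "Standard delivery is 5-7 business days.",
}

def pick_category(text: str) -> str:
    """First pass: stand-in for an LLM returning only the best-fit category."""
    return "billing" if "refund" in text.lower() else "shipping"

def build_enriched_prompt(text: str) -> str:
    """Append only the relevant stored context, ready for the second pass."""
    category = pick_category(text)
    # A real second Gen AI tool call would receive this enriched prompt.
    return f"Context: {CONTEXT_LIBRARY[category]}\nQuestion: {text}"

prompt = build_enriched_prompt("Can I get a refund for order 1234?")
```

Note that the shipping context never reaches the model for a billing question, which is exactly the "don't send the whole block of context" point.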

[00:05:12] Megan Bowers: Yeah, those are super interesting, and I like the example you gave for the routing of, maybe it's IT support tickets and you're like labeling them and routing them for different parts of the workflow.


[00:05:23] Examples and Use Cases
---

[00:05:23] Megan Bowers: Do you have any other just examples maybe of the LLM as a judge or the memory thing that you've seen, like use case-wise?

[00:05:31] Andrew Merrill: One of the examples we have is a workflow we built for a weekly status report process. Every week we're trying to capture what work has been done throughout the organization, but then we also have this second data source, which is how people are being recognized in the organization. Say my colleague does a great job, and I want to say, "Hey, great job." That's an entirely separate system that has nothing to do with our actual project tracking. People will describe what it is that they're grateful for: "You did a great job on this project; the way that you helped me opened up all these doors, and now our system is up and rolling exactly how we needed it. Thank you so much." So we have an AI process that takes a look at both sets of data and says, "These are all the active projects that we have, and these are all of the recognitions that different employees have sent to one another." Now we can marry those two together with AI, having that context analysis, and say, "Okay, which recognitions are tied to which projects?" So we can surface that to the appropriate team and say, "Hey, these are your team members that are doing a great job," or "This is what people are saying about your project, or the people that have helped with it." We have a kind of cascading process that involves a number of those different design patterns, with routing saying, "Hey, if the recognition just says 'Great job,' that's not going to give us any information to tie to projects." We want to filter that out. So instead of branching into two separate patterns, one branches into a dead end and the other feeds information forward, and that helps save on bandwidth and minimizes token usage throughout the process. There are a number of different examples I could give, but I think that's a pretty good one of ways to implement these processes.

[00:07:07] Megan Bowers: That's a really interesting example. I like that. Kind of gives some more context.


[00:07:11] Common Pitfalls and Best Practices
---

[00:07:11] Megan Bowers: You also just mentioned bandwidth and tokens, which brings me to my next question, which was what are some common pitfalls to avoid when we're bringing LLMs straight into Alteryx Designer?

[00:07:24] Andrew Merrill: This is a great question. Think about our typical chatbot. So we think about ChatGPT, we think about Grok, we think about Gemini. I log in, I have this portal, I type in my question and get an answer. That is inherently constrained by the fact that you're the one typing in an input, and you have to wait to read the output before you type your next question. Now we open up the world of Alteryx. We're getting into these API calls where I can send a thousand questions all at once. There's an inherent need to be cautious with how we're approaching this: one, what data are we sending in, and the privacy element of, "Do we have the proper authorization to do what we're doing?" Beyond that, one of the silly examples that is relatable to some, hopefully not too many, is the Email tool in Alteryx. A lot of people initially don't recognize that every record is sent as an email, so they'll have 300 records that they expect to be a table in one email, and now they've sent out 300 emails instead, and they have some explaining to do. It's the same thing with the Gen AI tools. Every record is basically a separate LLM prompt. So be cautious with how much data you're actually sending up. Are you sending a reasonable amount of tokens into the LLM of your choice so you can get good output, good results, while making sure that, obviously, you're not breaking the bank or sending needless information into the ether?
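One way to act on that caution is a pre-flight check before the batch runs: since every record becomes its own prompt, estimate the total token volume first. This sketch uses the common rough heuristic of about four characters per token; the function names and the budget are invented for illustration.

```python
def estimate_tokens(records: list[str]) -> int:
    """Very rough token estimate: ~4 characters per token (a common heuristic)."""
    return sum(max(1, len(r) // 4) for r in records)

def preflight(records: list[str], token_budget: int) -> bool:
    """True only if sending one prompt per record stays within the budget."""
    return estimate_tokens(records) <= token_budget

# 300 records means 300 separate LLM prompts, just like 300 separate emails.
ok_to_send = preflight(["short prompt"] * 300, token_budget=500)
```

A real workflow might log the estimate, or route over-budget batches to a sampling step instead of refusing outright.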

[00:08:45] Megan Bowers: Yeah, definitely. And then what about, I don't know, auditability, governance? Like what are some of your tips there for making sure that you're building a good workflow?

[00:08:55] Andrew Merrill: Absolutely. This is an area I'm quite passionate about. I did a talk at Inspire about utilizing the testing tools, making sure that you have good structures in place and that the data matches your expectations. It's the same thing with AI, in fact, more so with AI than with your deterministic processes. One of the things that makes AI so powerful is its ability to hallucinate. The upside is that it gives you the ability to come up with new ideas and new concepts, to connect things that may not seem simple to connect in the first place, but obviously that comes with inherent risk as well. And so the first thing I think is most important in terms of setting up these evaluations, or making sure that you have good tracking, is thinking about this at the very beginning of your process. One, what could go wrong? Two, what do you expect to happen?

One thing that is a very simple way to move forward, and that I think goes unrecognized a lot of times, is building a sample dataset. A lot of times we have whatever our data looks like, and we just try to build a workflow to meet that process, get done whatever we need to get done, and then we assume that we're good. That doesn't really give us any insight as to whether or not it's working for every edge case, or even for the main cases that we expect. So put together a sample set, even if it's just a few records: "Hey, if these fields look like this," I'm going to have a table for one product and a chair for another product, "it's going to be 52 of the chairs, and this table is red," or whatever else, "and I expect my LLM to output this." That way, whenever I need to test my workflow while I'm iterating, especially in the context of AI, where you're doing this prompt engineering or context engineering, if I change something, I need to know that everything I built originally hasn't completely broken. If I have this sample dataset, I can just run it through: "Does my output match what I expected?" If yes, then we're good to go and we can move on.
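The sample-dataset idea is essentially a golden-set regression check. Here is a small sketch: `run_workflow` is a hypothetical stand-in for the Gen AI step under test (a trivial rule here, so the example is runnable), and the golden set pairs each input record with the output you expect, to be rerun after every prompt change.

```python
def run_workflow(record: dict) -> str:
    """Stand-in for the workflow's LLM step (here: a trivial rule)."""
    return "furniture" if record["product"] in {"chair", "table"} else "other"

# A few hand-labeled records with the outputs we expect, per the transcript.
GOLDEN_SET = [
    ({"product": "chair", "qty": 52}, "furniture"),
    ({"product": "table", "qty": 1}, "furniture"),
    ({"product": "stapler", "qty": 3}, "other"),
]

def regression_check() -> list[str]:
    """Return a failure message for every golden record the workflow misses."""
    failures = []
    for record, expected in GOLDEN_SET:
        actual = run_workflow(record)
        if actual != expected:
            failures.append(f"{record['product']}: expected {expected}, got {actual}")
    return failures
```

An empty failure list means the prompt change didn't break anything you previously relied on.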

[00:10:51] Megan Bowers: Yeah, that makes sense. And maybe if it doesn't, then what? It throws an error, or...

[00:10:55] Andrew Merrill: I mean, at that point, it's dealer's choice. So if you have a system or process, let's say it's a fairly large-scale effort that's going to run regularly, we can build these audit trails, store them, or output them into a SQL database, and now we can track across this large list of data to say, "Hey, are we meeting whatever metrics we expect? Do 90% of our records meet this threshold?" And if not, then flag somebody, email them, let them know, "Hey, our process is starting to dip or to wane. We need to go take a look at this system." Or absolutely, like you said: "Hey, we don't expect plain text here. We expect JSON as our output, because this is what our target system requires. We didn't get JSON as our output, so I need an immediate error, and I need to feed that back through the system to go again."
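The "we expected JSON and didn't get it" check is easy to sketch: try to parse each response, and split out the failures so they can be retried or raised as errors. The function name is invented; the parsing uses only Python's standard `json` module.

```python
import json

def validate_json_outputs(responses: list[str]) -> tuple[list[dict], list[str]]:
    """Split LLM responses into parsed JSON objects and records needing a retry."""
    parsed, retry = [], []
    for resp in responses:
        try:
            parsed.append(json.loads(resp))
        except json.JSONDecodeError:
            # Feed these back through the system, or raise an immediate error.
            retry.append(resp)
    return parsed, retry

good, bad = validate_json_outputs(['{"status": "ok"}', "Sure! Here is the JSON:"])
```

Whether the failures trigger a retry loop or an alert email is, as Andrew says, dealer's choice.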

[00:11:41] Agentic AI and Workflow Automation
---

[00:11:41] Megan Bowers: I'm curious to hear, since you're sitting in the data science team and working with AI a lot, I want to know how you're thinking about agentic AI and what kinds of maybe elements to a workflow or a process would indicate it's moving in the direction of being agentic. Yeah.

[00:12:00] Andrew Merrill: There's actually a really good interview that Andrew Ng did at the LangChain conference. He's a Stanford professor, he was a co-founder of Coursera, and he's a specialist and expert in machine learning and AI; he's done a lot in this space. One of the things he mentioned is that we get really hung up on the idea of agents as entities, and there are a lot of arguments to be had about "Is this an agent? Is this not an agent?" Really, that's counterproductive; it doesn't help progress what we're trying to accomplish. The focus should instead be on how agentic a system is. The goal then is to start with whatever your process looks like currently and continue to push toward more and more autonomous systems. If you think about it, an Alteryx workflow by itself has a degree of agent-ness. It is a system that exists outside of a person that can do some process. So just going from an entirely human-run process to "a computer handles this," we've taken the first step toward building this agentic framework.

Now we add on top of that these LLMs or Gen AI tools, things that allow you to approach a problem more dynamically: "Okay, I have a workflow that has a general target goal." If you have a single Gen AI tool, that's more agentic than it was before. Now we have this extra step that takes in context we don't know exactly what it's going to look like to start with. It processes that information in a way where we have no guarantee of what the output is going to be, which is what we want because of the input variability. Now we can send that to a number of different systems. Like we said before with routing, we can send an email to one person versus a different person, depending on whether it meets certain criteria. We can change or modify marketing materials to appeal to certain audiences, to make sure that we're maintaining certain cultural customs, or to meet whatever standard style guide we have as an organization or as a company. So there's a lot of variability that can happen, which is what we want in an agentic system, and then we can do something after that. And then you build on top of that, like we said, with these design patterns, where you can incorporate iterative macros or chaining, basically creating a chain of thought in workflow form, where we have multiple cascading Gen AI tools feeding one into the next to enhance or enrich the data each step of the way. Putting all of this together, instead of just "Hey, I have an agent now," it's "Hey, I had this process. It's now more agentic than it was, and I have this idea for new information I can bring in to make it even more agentic." It's not as much an end state as it is a spectrum that we're trying to move along.

[00:14:40] Megan Bowers: I like that because I do think it can be very, "Is this agentic or not?" But if you do look at it as more of a spectrum, and even just being in Designer, it's like you can make things more and more automated by adding different tools or adding in the emailing, adding in more LLMs. So, yeah, that makes a lot of sense. I like that.

[00:14:59] Andrew Merrill: The other thing, too, with that concept is that it can be intimidating for people. You hear "agent" and you think of this super complicated thing, like, "This is the realm of advanced IT developers or software engineers that have all this high-tech knowledge that I don't have, so there's no hope for me." When in reality, if you can put tools on the canvas, you can build an agent system that helps automate your processes in a way that is generalizable and accomplishes goals that you could not have done, one, without an automation tool like Alteryx, and two, without these LLM capabilities like the Gen AI tools provide.

[00:15:35] Alteryx Co-pilot and Prompting Tips
---

[00:15:35] Megan Bowers: So we've been talking a lot about Gen AI tools, about bringing LLMs into Designer, but shifting gears a little bit, I want to talk about Co-pilot and prompting specifically. Sorry, I should say Alteryx Co-pilot, since there are a lot of...

[00:15:49] Andrew Merrill: Co-pilots out there nowadays. Yes.

[00:15:50] Megan Bowers: Many people have Co-pilots. But for users that have Alteryx Co-pilot, what are some best practices for prompting it? You recently wrote a blog on this, but what are some of your best practices?

[00:16:05] Andrew Merrill: So Co-pilot has improved tremendously, actually. With the new release, with data sampling and the way they've implemented the reasoning capabilities, it is way, way better than it was before, and I'm quite impressed. There are a lot of Co-pilots I've used in general where the output is not usually great, and our Co-pilot has room to improve and continue to grow; I'm excited to see where it goes. In terms of best practices, one of the things I recommend generally is being very specific about what it is that you want to do. You don't have to prescribe every step that you need Alteryx Co-pilot to take, but you do need to be specific in terms of what your end goal is and what realm of data you're working with. So give it some field names if you want specific things to happen. Data sampling, for example, only takes a couple of rows' worth of data, so if you know you have several different categories of information, let Co-pilot know, because that will only help it give better output. And then don't try to do too many things all at once. Give it a short series of steps so it doesn't have too much context to work with, too many paths to take, risking that it takes the wrong one. Give it enough constraints that you'll get good results, and then play around with it.

It really depends on what your skill level is. For people that are more advanced users of Alteryx, I think Co-pilot does really well with workflow summarization, allowing you to document your workflows quite quickly. For really any workflow I have at this point, I'll just ask Co-pilot to summarize what it does, then tweak the output and leave it in comments on the workflow, so I don't have to think as hard about what the workflow is doing and how to phrase it in a way that communicates well without being too verbose or wordy. But for people that are less familiar with Alteryx, newer users that are still figuring things out, experimenting, and trying new things, Alteryx Co-pilot is really good at explaining what tools to use for the kinds of processes you're going to be doing. So, "I have two datasets. How do I put these together?" "Hey, the Join tool will find the fields that have similar names and then join them together for you automatically, so you don't have as much to do." Now you can see, "Hey, this is the tool it used, and this is how it joins everything together," and that gives you a good learning opportunity. Or if you hit a random bug, "Hey, I'm getting mismatched data types, or 'cannot join string fields to non-string fields.' What does that mean?" You can ask Co-pilot, and it's really good at that. Or again, with data sampling, it will figure out the error you're having and tell you how to fix it.

[00:18:37] Megan Bowers: Super cool. One of the things you said at first was that it's important to be specific with what you're looking for. When I first started out with Alteryx, it was like, "Okay, here's this Excel report. I want you to replicate that exactly in Alteryx." So I knew exactly what I was looking for: I wanted these six columns, I wanted subheads, all this stuff. Now that we're incorporating AI more, it's an interesting takeaway for me that it's still so important to know what outputs you're looking for, especially when the output isn't deterministic every time.

[00:19:13] Andrew Merrill: Oh, it's critical. And that's one thing, too, that is fascinating to me. A lot of people will engage with AI as if it can read your mind. Think about it like a colleague: if you ask them to go get you some numbers and they don't have any idea what you're talking about, you're going to get bad data back from them, or you're going to get nothing, because they get confused, get busy with something else, and just drop your request. AI is not going to drop your request, but it may make something up for you that doesn't meet the brief. So yeah, being specific is definitely really, really important. And that's where, like I said with the memory pattern, bringing in good context comes in. The concept of the AI data clearinghouse that's talked about across the organization and externally is having this space where we have really clean data that we can operate off of and bring into all of our AI processes.

[00:20:01] Getting Started with AI in Alteryx
---

[00:20:01] Megan Bowers: So for listeners who maybe haven't experimented as much with AI, how would you recommend getting started trying new AI use cases?

[00:20:12] Andrew Merrill: Absolutely. The first thing I would say is just start playing around. Put a tool on the canvas, pick a lightweight model that's not going to break the bank, like we talked about before, and just start putting some data in. See what you get out. If you change the prompt in different ways, do you get better outputs? Does it make more sense? How can you control it? Because one of the key factors, specifically with Alteryx, is that if you are prompting an LLM of your choice through its standard interface, your own mind is doing the parsing of information. You ask a question, it gives you these big paragraphs of response, and you read and interpret that data. In Alteryx, that doesn't mean anything. If the data is just paragraphs of text and we don't expect to immediately output it, how do we do anything with that data? We're basically back in the same boat: unstructured data, a large paragraph with information inside of it that we can't do anything with. So find ways to encourage or, again, constrain the LLM. Like I said before, if you want JSON as your output, give it the exact structure you expect: "I need these three fields every time this runs; otherwise my downstream work is not going to work." That, I think, is something else to play around with: making sure that you understand how to reference the right input, give it the proper amount of context, and then constrain the output so that it's useful for you.

One of the things, too, early on, especially with older models (it's improved since): things like ChatGPT, for example, really like using Markdown. It gives you these big headers and whatever else. But when you pull that into Alteryx as plain text, now you have these hashtag symbols. For JSON, you'll get three tick marks at the beginning, because it's trying to format it for your browser. In Alteryx, you don't want any of that. So prepare yourself ahead of time and know what you're going to get, so when you run your workflow, you can expect it to run the same way every time. We've talked about the inherent variability that these LLMs, or Gen AI tools, can provide, so again, providing the constraints that allow you to get consistent output is crucial there.
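The "three tick marks" cleanup can be sketched in a few lines: strip a surrounding Markdown code fence (with an optional language tag like `json`) before parsing. The helper name is invented; only Python's standard `json` module is used.

```python
import json

FENCE = "`" * 3  # a literal triple backtick, built up so it can't be mistaken for a fence here

def strip_fences(text: str) -> str:
    """Remove a surrounding Markdown code fence, if the model added one."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith(FENCE):
        lines = lines[1:]  # drop the opening fence, e.g. a json-tagged one
    if lines and lines[-1].startswith(FENCE):
        lines = lines[:-1]  # drop the closing fence
    return "\n".join(lines)

# Simulated LLM output: JSON wrapped in a browser-friendly code fence.
raw = FENCE + 'json\n{"score": 7}\n' + FENCE
data = json.loads(strip_fences(raw))
```

In a Designer workflow the same cleanup would typically be a RegEx or Formula tool step between the Gen AI tool and whatever consumes its output.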

[00:22:16] Megan Bowers: Yeah. And I've found from playing around with it myself, you'll say in the prompt, "Okay, give this a score of one to 10 for how well this matches." And then it outputs, "The score here is seven," and it's like, "Okay, wait, no. I literally just want a number. Really, just give me one number, please." So there's definitely some work to do around constraining the outputs once you start playing around with it. But...

[00:22:44] Andrew Merrill: And then as you play, the goal is to start refining this and say, "Where can I apply these Gen AI tools? Where can I apply an LLM in my business processes?" One really good general use case for LLMs is category standardization. One of the very classic examples is that you have a bunch of country codes. Countries are a static list, 200 or so countries in the world, so we know that's fixed. They all have names, but my input may vary: some may be abbreviated, some may be misspelled slightly. That's a really good place to use an LLM to generate a standardized list. That's actually one of the blueprints that the Gen AI tool team has put together that can be leveraged, and you can build your own. The goal of those is to inspire creativity and say, "Hey, what's possible? Can you make this for yourself?" In doing so, you give yourself the proper mindset in general: when you build with these LLM tools or, like I said, with Gen AI tools, the goal is to minimize their usage as much as possible to maximize the value and consistency of your process.
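The country-standardization use case looks roughly like this. `normalize_country` is a hypothetical stand-in for an LLM constrained to answer with exactly one name from the canonical list (here it is a lookup table so the sketch runs deterministically); the alias data is invented for illustration.

```python
# The fixed canonical list: countries don't change run to run.
CANONICAL = ["United States", "United Kingdom", "Germany"]

# Messy variants an LLM handles well: abbreviations, typos, other languages.
ALIASES = {
    "usa": "United States", "u.s.": "United States", "unted states": "United States",
    "uk": "United Kingdom", "deutschland": "Germany",
}

def normalize_country(raw: str) -> str:
    """Stand-in for an LLM prompted to output one canonical country name."""
    key = raw.strip().lower()
    if key in ALIASES:
        return ALIASES[key]
    for name in CANONICAL:
        if key == name.lower():
            return name
    return "UNKNOWN"  # route these records for human review

cleaned = [normalize_country(c) for c in ["USA", "uk", "Unted States", "Atlantis"]]
```

Constraining the model to the canonical list, with an explicit UNKNOWN bucket, is what keeps the output consistent enough for downstream joins.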

So I could start with whatever input I want and say, "Analyze this for me. Here's all my data. Try and join it." And you will very likely get complete garbage output. But the join, that's fixed. You know exactly what you want to join on every time. You can build your workflow in Alteryx to do that for you. And now really the only thing that you need to use your LLM for is that context processing, for doing the really heavy lifting that a typical workflow can't handle, and that pushes value to the next level. So take your standard Alteryx workflow and use the Gen AI tools to enhance it as opposed to using the Gen AI tools to try and replace parts of your workflow.

A super interesting use case that I've had the pleasure of thinking about and discussing with some different team members here is in terms of modeling and forecasting. One of the things that can be very easy to do is, "Hey, here's all this data. LLM, what do I expect the output to be next month?" That's not great. This is a very standard machine learning protocol; you can build models to take care of it for you that are much more robust and give you more accurate results. But where do we find context that an LLM might enhance that with? At the very beginning of the process. Take, for example, a marketing campaign. I want to know, is my organization going to be successful? Are we going to sell more widgets if we market to this specific sector or this particular pool of the market? I have all my internal metrics. I have some focus group testing that I can apply to this, maybe different survey results. I have all this internal data, but now we have the ability to reach out. Let's say we can look at the news articles being written: what does the political climate in a certain area look like? What are people in the local area talking about? That's a bunch of context that's completely generalized, very unstructured, very difficult to parse in a deterministic way, but something that an LLM is really good at taking in and giving you added insight from. Instead of trying to get the LLM to do everything, we can put it at the very beginning of the process and have a new step that asks, "What does the environment outside of my organization look like? How receptive are people to new products or new initiatives?" I can feed that in with all the rest of my internal data, and now I have enhanced my input and built a more powerful model to do these predictions.

[00:26:15] Megan Bowers: Yeah, that's really exciting. I like that use case. And the ability to make workflows even more valuable is what I find exciting about the future of all of this.


[00:26:25] Future of AI with Alteryx
---

[00:26:25] Megan Bowers: But I'm curious, what makes you excited about the future of using AI with Alteryx?

[00:26:31] Andrew Merrill: Right now, we're at an inflection point culturally, as a world population. It's very similar to the dot-com era, when the internet was first coming into being and we saw all of these systems emerge, or even the analog-to-digital era, when computers were first made and grew as people learned that they could process a lot more information than they ever could in their own minds, into the internet age. Now the AI age is coming, and again, it allows us to enhance our processes to do more than we ever thought was possible before. I remember a lot of documentaries and things on Deep Blue, for example, beating Garry Kasparov in chess. Everybody was like, "No, this is impossible. This can't be done. These grandmasters know too much. No computer is going to be able to think at that level," and then it beat him. People just lost their minds. And then AlphaGo. Go is so much more complex than chess. "There's no way it can beat Lee Sedol," who was the world champion at the time, one of the best players of Go. "It's way too complicated. Computers will never be able to handle this level of information," and then it beat him. People lost their minds again. "What does it mean to be human? This is crazy." And a lot of people nowadays are like, "There's no way AI is going to be able to do this." It's like, "Ah, just wait. This is not the first time this has happened, or that we've been at this crossroads." I'm super excited to see where that takes us.

And specifically in the realm of Alteryx, the ability of Alteryx to take business processes and automate them is super duper important. So much of what any organization or company does is woefully inefficient because it takes people's time and focus to do very mundane tasks. People, a lot like these LLM tools, are very creative, and they have a lot to offer beyond just clicking a button every day. So being able to take those processes where people would ordinarily just push a button, and maybe it's a series of colored buttons depending on what the weather is or whatever else, all that context doesn't need to weigh on a person. We can shift that off to an LLM, this really complicated machinery that can simplify business processes for any organization. Marrying that with Alteryx so that you can enhance these workflows to do even more than we ever could before is just so, so impressive.

One of the ways that I have thought about this, too, hopefully tying things in a circle, is making coffee in the morning. If you have a coffee machine, you're spending your time making coffee. For some people it can be kind of therapeutic, just doing that by itself. You have a number of steps: if you want fresh beans, you have to grind them and then put them in your coffee maker, get water, et cetera. Now say you get an android house-bot. It doesn't make any sense to put that house robot into doing all the same steps you did, because it could spend its time better doing your dishes, going out to mow your lawn, et cetera. In reality, like an Alteryx workflow, you can have a dispenser that sits on top of the coffee machine and grinds the beans for you. If you connect it to a filtered water input, you have multiple data sources now feeding into your coffee machine. All you need the android to do is take that finished product over to you wherever you are in the house. That's the complex element: you could be in the living room, sitting on the couch on a Saturday morning watching TV with your kids, or you could be at the office working, and there could be obstacles in the way. That's what the android is built to do. But let the coffee machine do all the rest of the work to make the coffee, and let the LLM do what it needs to do to enhance the process and get that coffee to you.

[00:29:57] Megan Bowers: That's a great example.

[00:29:58] Andrew Merrill: So the same way with Alteryx and these Gen AI tools. Yeah.

[00:30:01] Megan Bowers: Yeah, I like that metaphor. That's an awesome example, and I think a great way to wrap up.


[00:30:06] Conclusion and Wrap-Up
---

[00:30:06] Megan Bowers: Definitely makes me feel excited about the future with all of this. So thanks again, Andrew, for coming on the show and for sharing all of your best practices and expertise.

[00:30:16] Andrew Merrill: I appreciate it. I had a great time with you, Megan. This was a lot of fun; it's a topic I'm super passionate about. Go enjoy yourself a cup of coffee and play with the Gen AI tools in Alteryx.

[00:30:26] Megan Bowers: Thanks for listening. To learn more about topics mentioned in today's episode, head over to our show notes on alteryx.com/podcast. And if you like this episode, leave us a review. See you next time.


This episode was produced by Megan Bowers (@MeganBowers), Mike Cusic (@mikecusic), and Matt Rotundo (@AlteryxMatt). Special thanks to @andyuttley for the theme music track, and @mikecusic for our album artwork.