The ability to harness large language models (LLMs) locally offers unparalleled advantages in terms of data security, cost efficiency, and rapid prototyping. By integrating Alteryx Designer with Ollama, users can seamlessly process documents, extract insights, and maintain complete control over their data—all without relying on external cloud services.
Alteryx Designer is renowned for its intuitive, drag-and-drop interface that simplifies complex data workflows. Ollama, on the other hand, empowers users to run the latest open-source LLMs directly on their local machines. Together, they enable secure, cost-effective document processing that never leaves your machine.
The integration process involves a series of steps within Alteryx Designer:
1. Build a JSON payload for each prompt using the Formula tool.
2. Send the payload to the local Ollama server with an HTTP request.
3. Parse the model's JSON response into tabular data with the JSON Parse Tool.
Watch the video walkthrough of these steps below.
To harness the power of local large language models (LLMs) with Alteryx Designer, you'll first need to set up Ollama and run the Gemma 3 model. This setup ensures that all data processing remains on your local machine, enhancing privacy and reducing reliance on external services.
Ollama is available for Windows, macOS, and Linux. Visit the official Ollama download page and select the installer appropriate for your operating system.
On Linux, Ollama can be installed from the terminal with a single command:
> curl -fsSL https://ollama.com/install.sh | sh
Ensure that the command-line interface (CLI) is installed during the setup process. After installation, you can verify the installation by running the following command in your terminal or command prompt:
> ollama
This command should display a list of available Ollama commands, confirming a successful installation.
Gemma 3 is a family of lightweight, multimodal LLMs developed by Google, optimized for local deployment. To run the base Gemma 3 model, execute the following command:
> ollama run gemma3
This command will download and initiate the default Gemma 3 model. If you prefer a specific model size, such as the 1B parameter variant, use:
> ollama run gemma3:1b
Available Gemma 3 model sizes include 1B, 4B, 12B, and 27B parameters. Choose a model size that aligns with your hardware capabilities. For instance, the 1B model is suitable for machines with limited resources, while larger models like the 27B variant require more robust hardware.
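Once a model has been downloaded, you can confirm which models are available locally:
> ollama list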
If you plan to integrate Ollama with Alteryx Designer via API calls, you'll need to start the Ollama server. This can be done by running:
> ollama serve
This command starts the Ollama server on localhost:11434, allowing you to send HTTP requests to the model. Ensure that this server is running whenever you intend to make API calls from Alteryx.
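Before building the workflow, you can confirm the server is reachable with a quick request to the root endpoint:
> curl http://localhost:11434
If the server is up, it should respond with a short "Ollama is running" message.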
With Ollama and Gemma 3 set up on your local machine, you're now ready to run the demo workflow that integrates these tools with Alteryx Designer for efficient and secure data processing.
Ollama's API expects a specific JSON structure to process prompts effectively. A typical payload might look like:
{
  "model": "gemma3:4b",
  "prompt": "Why is the sky blue?",
  "stream": false
}
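You can sanity-check this exact payload from the command line before wiring it into Alteryx (assuming the Ollama server from the previous step is running and the gemma3:4b model has been pulled):
> curl http://localhost:11434/api/generate -d '{"model":"gemma3:4b","prompt":"Why is the sky blue?","stream":false}'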
In Alteryx, the Formula tool can be configured to generate this JSON as a string expression. A static version of the payload looks like:
'{"model":"gemma3:4b","prompt":"Why is the sky blue?","stream":false}'
To make the payload dynamic, the hard-coded prompt is replaced with the field holding each PDF's extracted text, ensuring every document is appropriately formatted for the LLM to process.
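A minimal sketch of that expression, assuming the extracted text sits in a hypothetical field named [PDF_Text] (substitute your own field name) and escaping embedded double quotes so the JSON stays valid:
'{"model":"gemma3:4b","prompt":"' + Replace([PDF_Text], '"', '\"') + '","stream":false}'
Real documents may also contain newlines or other control characters, which would need similar escaping before the payload parses as valid JSON.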
After receiving the response from Ollama, the JSON Parse Tool in Alteryx can deconstruct the JSON into a tabular format. This structured data can then be seamlessly integrated into further analytical workflows, enabling tasks such as data visualization, reporting, or feeding into machine learning models.
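For reference, a non-streaming response from /api/generate looks roughly like the following (metadata fields abbreviated and values illustrative; the exact field set varies by Ollama version). The generated text arrives in the "response" field, which is the main target for the JSON Parse Tool:
{
  "model": "gemma3:4b",
  "created_at": "2025-05-01T12:00:00Z",
  "response": "The sky appears blue because of Rayleigh scattering...",
  "done": true
}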
Integrating Alteryx Designer with Ollama opens up a realm of possibilities for secure, efficient, and cost-effective data processing using local LLMs. This synergy empowers users to build robust workflows that can handle complex document analysis tasks entirely offline, ensuring data privacy and fostering rapid innovation.
For a visual demonstration of this integration, refer to the accompanying example workflow provided in this post.