
Ryan_Merlin, Alteryx

The ability to harness large language models (LLMs) locally offers unparalleled advantages in terms of data security, cost efficiency, and rapid prototyping. By integrating Alteryx Designer with Ollama, users can seamlessly process documents, extract insights, and maintain complete control over their data—all without relying on external cloud services.

 

Why Combine Alteryx Designer with Ollama?

 

Alteryx Designer is renowned for its intuitive, drag-and-drop interface that simplifies complex data workflows. Ollama, on the other hand, empowers users to run the latest open-source LLMs directly on their local machines. This integration facilitates:

 

  • Enhanced Data Privacy: Process sensitive documents without transmitting data to external servers.
  • Cost Savings: Eliminate expenses associated with cloud-based LLM services.
  • Accelerated Prototyping: Quickly test and iterate on workflows without dependency on internet connectivity.
  • Optimized Performance: Leverage local GPUs to expedite processing tasks.

 

Workflow Overview: Connecting Directly to the Local LLM API

 

The integration process involves a series of steps within Alteryx Designer: 

 

  1. Prompt Construction: Employ the Formula Tool to craft a JSON payload that encapsulates the input text (for example, text extracted from a PDF), preparing it for the LLM API call.
  2. LLM Invocation: Use the Download Tool to send the JSON payload to Ollama's local API endpoint; a curl equivalent is shown just after this list.
  3. Response Parsing: Apply the JSON Parse Tool to interpret the LLM's response and convert it into a structured format suitable for downstream analysis.
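
In the Download Tool, step 2 maps to setting the URL to http://localhost:11434/api/generate, choosing POST as the HTTP Action, and taking the request body from the field that holds the JSON payload. Outside of Designer, you can sanity-check the same request from a terminal with curl (assuming the Ollama server is running on its default port and the gemma3:4b model has been pulled):

> curl http://localhost:11434/api/generate -d '{"model":"gemma3:4b","prompt":"Why is the sky blue?","stream":false}'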

 

Watch the video walkthrough of these steps below:

 

 

Getting Started with Ollama and Gemma 3

 

To harness the power of local large language models (LLMs) with Alteryx Designer, you'll first need to set up Ollama and run the Gemma 3 model. This setup ensures that all data processing remains on your local machine, enhancing privacy and reducing reliance on external services.

 

Step 1: Download and Install Ollama

 

Ollama is available for Windows, macOS, and Linux. Visit the official Ollama download page and select the installer appropriate for your operating system.

 

  • Windows: Requires Windows 10 or later.
  • macOS: Requires macOS 11 Big Sur or later.
  • Linux: Installation can be done via a shell script: 
> curl -fsSL https://ollama.com/install.sh | sh

 


 

Ensure that the command-line interface (CLI) is installed during the setup process. Afterward, you can verify the installation by running the following command in your terminal or command prompt:

 

> ollama

 

This command should display a list of available Ollama commands, confirming a successful installation.

 

Step 2: Run the Gemma 3 Model

 

Gemma 3 is a family of lightweight, multimodal LLMs developed by Google, optimized for local deployment. To run the base Gemma 3 model, execute the following command:

 
> ollama run gemma3

 

This command will download the default Gemma 3 model (if it isn't already present) and start an interactive session. If you prefer a specific model size, such as the 1B parameter variant, use:

 
> ollama run gemma3:1b

 

Available Gemma 3 model sizes include 1B, 4B, 12B, and 27B parameters. Choose a model size that aligns with your hardware capabilities. For instance, the 1B model is suitable for machines with limited resources, while larger models like the 27B variant require more robust hardware.
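
You can also pull a model ahead of time and confirm what is installed locally before pointing Alteryx at it:

> ollama pull gemma3:4b
> ollama list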

 

Step 3: Start the Ollama Server (Optional)

 

If you plan to integrate Ollama with Alteryx Designer via API calls, the Ollama server must be running. On Windows and macOS the desktop app typically starts it in the background automatically; otherwise, start it manually by running:

 
> ollama serve

 

This command starts the Ollama server on localhost:11434, allowing you to send HTTP requests to the model. Ensure that this server is running whenever you intend to make API calls from Alteryx.
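
A quick way to confirm the server is reachable before building the workflow is to request the root endpoint from a terminal; a healthy server replies with "Ollama is running":

> curl http://localhost:11434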

 

With Ollama and Gemma 3 set up on your local machine, you're now ready to run the demo workflow that integrates these tools with Alteryx Designer for efficient and secure data processing.

 

Crafting the JSON Payload for Ollama

 

Ollama's API expects a specific JSON structure to process prompts effectively. A typical payload might look like:

 

{
   "model": "gemma3:4b",
   "prompt": "Why is the sky blue?",
   "stream": false
}

 

In Alteryx, the Formula tool can be configured to generate this JSON dynamically. A minimal sketch, assuming the upstream text lives in a hypothetical [ExtractedText] field, concatenates the field into the payload and escapes any embedded double quotes so the JSON stays valid (newlines and backslashes would need similar treatment in production):

'{"model":"gemma3:4b","prompt":"' + Replace([ExtractedText], '"', '\"') + '","stream":false}'

 

This approach ensures that each PDF's content is appropriately formatted for the LLM to process.

 

Parsing and Utilizing the LLM's Response

 

After receiving the response from Ollama, the JSON Parse Tool in Alteryx can deconstruct the JSON into a tabular format. This structured data can then be seamlessly integrated into further analytical workflows, enabling tasks such as data visualization, reporting, or feeding into machine learning models.
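
With "stream": false, Ollama returns one complete JSON object per request rather than newline-delimited chunks, which is what the JSON Parse Tool expects. The generated text sits in the response field; an abridged example is below (the full reply also carries timing metadata such as total_duration):

{
   "model": "gemma3:4b",
   "created_at": "2025-01-01T12:00:00Z",
   "response": "The sky appears blue because of Rayleigh scattering...",
   "done": true
}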

 

Best Practices and Considerations

 

  • Model Selection: Choose an LLM that aligns with your specific use case. Llama 3.1 is a versatile choice for general-purpose tasks, while newer, smaller models like Gemma 3 remain effective for more targeted tasks.
  • Resource Management: Ensure your local machine has adequate resources (CPU, GPU, RAM) to handle the processing demands of the selected LLM.
  • Data Formatting: Maintain consistency in the structure of your JSON payloads to facilitate reliable parsing and analysis.
  • Security Compliance: By keeping all data processing local, you enhance compliance with data protection regulations and organizational policies.

 

Conclusion

 

Integrating Alteryx Designer with Ollama opens up a realm of possibilities for secure, efficient, and cost-effective data processing using local LLMs. This synergy empowers users to build robust workflows that can handle complex document analysis tasks entirely offline, ensuring data privacy and fostering rapid innovation.

 

For a visual demonstration of this integration, refer to the accompanying example workflow provided in this post.

Comments
AlteryxMarco, Alteryx

This is AWESOME @Ryan_Merlin!! Can't wait to see the next set of videos, and thank you for the recording and packaged flows!