INTRO
Warning: This macro uses throttling and requires AMP to be turned OFF.
Note: This macro specifically targets the o3-mini model, and allows you to provide an optional JSON schema to receive structured responses.
Note: Please talk to Continuum Jersey about getting a tailored macro for private Azure OpenAI instances.
This macro will allow you to send data to OpenAI's ChatGPT AI.
This version of the macro is a connector to the ChatGPT o3-mini model, a lighter model focused on STEM and coding, with configurable reasoning effort. The logo for the macro has been updated to include the ChatGPT version number "o3".
The ChatGPT AI has multiple API (Application Programming Interface) functions or "endpoints", for generating images, writing code, answering questions or completing phrases. This Connector points to the Chat Completions endpoint, which takes a natural language prompt string, and tries to reply, answering your questions or chatting with you.
With Alteryx, you are not going to necessarily be chatting, but you could ask the AI to take your data and create natural language messages with it, or summarise the info that you send. The AI is so powerful it is hard to imagine all the uses it could be put to, but this connector will allow you to experiment with it and find new ways to make it work for you, and your business.
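The macro hides the HTTP plumbing for you, but for context, here is a minimal Python sketch of the kind of request body the Chat Completions endpoint expects for this model. The field names (`max_completion_tokens`, `reasoning_effort`, `n`) correspond to the macro's Max_Completion_Tokens, Reasoning Level and ResponseResults inputs. This is an illustration of the payload shape only, not the macro's actual implementation.

```python
import json

def build_chat_request(prompt: str,
                       max_completion_tokens: int = 2000,
                       reasoning_effort: str = "medium",
                       n: int = 1) -> dict:
    """Build the JSON body for one Chat Completions call to o3-mini."""
    return {
        "model": "o3-mini",
        "messages": [{"role": "user", "content": prompt}],
        "max_completion_tokens": max_completion_tokens,
        "reasoning_effort": reasoning_effort,  # "low" | "medium" | "high"
        "n": n,  # number of alternative responses to generate
    }

body = build_chat_request("Write a limerick about a man from Nantucket")
print(json.dumps(body, indent=2))
```

The macro sends a body like this, along with your API Secret Key (and optional OrgID) in the request headers, once per input record.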
Note: You will need an OpenAI charging account.
QUICK START
To use this macro, you first need to go to...
... and sign up for the API. Here, you will need to create a new API account, ensuring that you have access to the o3-mini model. Then you create an API Secret Key, and you will be given a Personal Organisation ID. The macro has two text boxes where you enter your Key and OrgID. Once the macro has the identifying Key and OrgID, very little else is required. The OrgID is optional; if you do not need it, you can remove the default text and leave that config option in the macro blank.
The main macro is called OpenAI_ChatGPT_Completions.yxmc. There are other macros in the package; these are required but can be ignored, as they are sub-components of the main macro. To use the macro, open the YXZP file and choose to save the expanded files somewhere you will find them easily. As with any other macro, right-click on the canvas, choose "Insert Macro", browse to where you expanded the YXZP file, find the OpenAI_ChatGPT_Completions.yxmc file and select it.
For input to the macro, all you have to provide is a record ID and a prompt string. The other variables are optional, and you can ignore them initially.
The Record ID is just an Int32 value, and the RecordID tool can quickly create this for you. This allows you to use a Join tool to hook your responses to your initial prompts, if you wish to process the responses in downstream flows or systems.
The Prompt string is an instruction or phrase in normal English, such as "Tell me what the weather is like in Putney" or "Write a limerick about a man from Nantucket". Of course, the beauty of Alteryx is that you can build multiple prompt strings (one per record), perhaps containing info about your customers for example, and ask ChatGPT to write personalised emails to those customers.
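The per-record prompt-building idea can be sketched in a few lines of Python. In Alteryx you would do this with a Formula tool; the record fields (`name`, `product`) and the email wording below are purely hypothetical examples.

```python
# Hypothetical customer records, as they might arrive from an Alteryx stream.
records = [
    {"RecordID": 1, "name": "Alice", "product": "Designer"},
    {"RecordID": 2, "name": "Bob", "product": "Server"},
]

# Build one personalised prompt string per record, keyed by RecordID
# so responses can be joined back to the original rows.
prompts = [
    {"RecordID": r["RecordID"],
     "Prompt": f"Write a short, friendly email to {r['name']} "
               f"about their {r['product']} licence."}
    for r in records
]

for p in prompts:
    print(p["RecordID"], p["Prompt"])
```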
IN DEPTH GUIDE
This link takes you to an introduction on the Completion endpoint, and contains background info on how best to make your prompts...
https://platform.openai.com/docs/overview
The link below details the values you can send to the AI to get a more tailored response.
https://platform.openai.com/docs/api-reference/completions
Here are the variables that the macro requires you to provide...
- Wait Seconds (REQUIRED) (Double); This is the amount of time in seconds to wait between each call to the API, to respect the throttling requirements of OpenAI. OpenAI vary their throttling so you just need to increase this value until you stop getting 429-Busy responses from the API.
- RecordID (REQUIRED) (Int32); A simple record ID so that you can join response records back to your input records.
- Prompt (REQUIRED) (String); This is the text that ChatGPT's demo website accepts, and it is effectively where you "say something to ChatGPT". This is where you build a statement, which may be constructed from your own Alteryx data.
- Reasoning Level (Selection); A selectable variable to control how long the model spends reasoning. Options are high, medium and low. Default option is medium. High reasoning requires more tokens and takes longer to respond.
Here are the variables that are optional...
- Max_Completion_Tokens (OPTIONAL) (Int32); The API is free-form in its responses, and might witter on a bit. OpenAI use the concept of tokens, as approximations for words, to limit the size of the reply. The default value in the macro is 2000 tokens, and you can override this by putting your own value here.
- ResponseResults (OPTIONAL) (Int32); You can ask the API to give you multiple replies with this value. If you use 0 for Temperature_0to1, then each reply will probably be the same, so this value defaults to 1 for a single response. If you use a high temperature value like 0.9, each response might be different, so you could select the most appropriate or creative response from the selection of responses.
- Attempts (OPTIONAL) (Int32); ChatGPT is very popular. As such it is sometimes busy, or it may crash. This Attempts value sets how many tries the macro should make to get a good response, and it defaults to 5 attempts per request. This is usually enough to ride over the minor temporary outages that occur, but you can raise this value if it is important for you to get a response for each prompt.
- Stop (OPTIONAL) (String); ChatGPT's Completions endpoint is designed to respond a little unpredictably, as a human might do. You can ask it to cease its response when it naturally generates a particular string, for example ". ", to indicate the end of a sentence. Again, the API documentation describes this better, and the macro defaults to not sending this string unless you override it here.
- ResponseFormat (OPTIONAL) (String); This field allows you to add a JSON schema for the response, so that you can process the response back in Alteryx and out to other systems or outputs. For example, here is a very simple JSON Schema that tells the AI to return a JSON object that has a property "ai_response", that is of type string...
{
  "type": "json_schema",
  "json_schema": {
    "name": "response_schema",
    "schema": {
      "name": "response",
      "type": "object",
      "properties": {
        "ai_response": {
          "type": "string"
        }
      },
      "required": ["ai_response"]
    }
  }
}
The data returned using this schema would look like this ...
{"ai_response":"There was a young man from Nantucket..."}
OPERATION
The main outer macro calls some submacros. If you copy the main macro, please remember to also copy the numbered sub-macros and DosCommand macro with it, or re-expand the package in your target folder. The macros have relative directory addressing, so they will attempt to call each other in the same workflow folder as the main outer macro.
EXAMPLE
Below is a screenshot showing an example flow.
Note here that the o3 model has returned JSON format data.
THROTTLING
Most APIs do not have infinite resources, so they ask you to comply with a request rate. OpenAI change their throttling rates as they see fit, and as such I cannot specify a Wait Seconds value that will always work. If you make too many requests per second, you will get a 429-Busy response, so increase your Wait Seconds parameter until you can reliably call the API without breaking throttle limits.
OpenAI also have a token limit, and you will get a specific error referencing this limit when you break it. Trim back the Max_Completion_Tokens value to stay under the token limit.
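The interaction between Wait Seconds and Attempts can be sketched as a simple retry loop. This is an illustration of the logic, not the macro's exact implementation; `send_request` is a hypothetical stand-in for one HTTP call to the API.

```python
import time

def call_with_throttle(send_request, wait_seconds: float, attempts: int = 5):
    """Try a request up to `attempts` times, pausing `wait_seconds`
    before each call to respect the API's request-rate limit.
    `send_request` returns a (status_code, body) pair; 429 means busy."""
    for attempt in range(attempts):
        time.sleep(wait_seconds)          # throttle: wait between calls
        status, body = send_request()
        if status == 200:
            return body                   # success: 200-OK
        # 429-Busy or a transient error: loop round and try again
    raise RuntimeError(f"No 200-OK response after {attempts} attempts")
```

If you still see 429 responses with this pattern, the fix is to raise `wait_seconds`, not `attempts`.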
SUMMARY
So, the most exciting and revolutionary new technology is available, and Alteryx can be used to hook up to it and begin leveraging it. The amazing flexibility of Alteryx allows it to shape data to the requirements of a particular API, and process the responses back into your workflows and onward to downstream systems.
Any problems with the macro or questions, please reply below.
CONTINUUM
This macro is the product of research by Continuum Jersey. We specialise in the application of cutting edge technology for process automation, applying Alteryx and AI tech for creative solutions. If you would like to know how your business could apply and benefit from Alteryx, and the agility and efficiency it provides, we would like to talk to you. Please visit dubDubDub dot Continuum dot JE , or send an email to enquiries at Continuum dot JE .
This macro requires some adaptation for use with Azure OpenAI private instances. Please talk to Continuum Jersey if you wish to set up a private instance of an OpenAI server and use a macro built for your business that authenticates to your private instance.
DISCLAIMER
This connector is free, for any use, commercial or otherwise. Resale of this connector is not permitted, either on its own or as a component of a larger macro package or product. Please bear in mind that you use it at your own risk. ChatGPT is a beta demonstration of technology, and the service's performance and availability might be inconsistent. This macro connector is just an HTTP bridge to the service, and it will either get a 200-OK response or some kind of error. This connector macro is not responsible for the quality of the returned data. Please observe all usual data protection rules for your business in your jurisdiction when interacting with web services.
The download link doesn't seem to be working. The link appears to be a link to this page itself!
Might be a problem with the website (ie not in my control). The page has 7 downloads so far, so some people have been able to download. My email is steve at continuum dot je, so ping me a mail if you still cannot obtain it, and I will send you the zip directly.
Have updated the link to use yxzp format file, rather than zip.
... and yep, think that was it, the page seems to block zip files, which is odd, as yxzp is literally a zip file.
Yep. Works fine now.
Awfully silly to block zip but not yxzp. Maybe they don't want people posting locked/encrypted zips?
Great, thanks for letting me know!