INTRO
Warning: This macro uses throttling and requires AMP to be turned OFF.
Also: Scroll to the end to see how to use ChatGPT-4o, the latest version.
This macro will allow you to send data to OpenAI's ChatGPT AI.
This version of the macro is a connector to ChatGPT4. The original Completions macro connected to ChatGPT version 3.0, and a later update connected to v3.5. The macro logos have been updated to include the ChatGPT version number.
The ChatGPT AI has multiple API (Application Programming Interface) functions or "endpoints", for generating images, writing code, answering questions or completing phrases. This Connector points to the Completions endpoint, which takes a natural language prompt string, and tries to reply, answering your questions or chatting to you.
With Alteryx, you are not going to necessarily be chatting, but you could ask the AI to take your data and create natural language messages with it, or summarise the info that you send. The AI is so powerful it is hard to imagine all the uses it could be put to, but this connector will allow you to experiment with it and find new ways to make it work for you, and your business.
Note: ChatGPT4 is currently in wait-list Beta. You will need an OpenAI account, and you will need to register the account on the wait list to get access, at least until v4 is publicly released. There may not be a "free" option yet, so thoroughly check the possible cost of usage on OpenAI's site. ChatGPT4 has more consistent responses, which will help Alteryx users process responses to extract salient data. It also follows instructions more thoroughly.
QUICK START
To use this macro, you first need to go to...
... and hit the "Sign Up" button. Here, you will need to create a new API account. Then you create an API Secret Key, and you will be given a Personal Organisation ID. The macro will have two text boxes where you enter your Key and OrgID. Once the macro has the identifying Key and OrgID, very little else is required.
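For the curious, the Key and OrgID simply travel as HTTP headers on every request the macro makes. A rough Python sketch of that idea (the macro itself is built from Alteryx tools, not Python; the header names are taken from OpenAI's API reference, and the credential values below are placeholders):

```python
# Sketch: how the API Secret Key and Organisation ID identify you to OpenAI.
# Placeholder values; substitute your own credentials from your OpenAI account.
API_KEY = "sk-YOUR-SECRET-KEY"
ORG_ID = "org-YOUR-ORGANISATION-ID"

def build_headers(api_key, org_id):
    """Return the headers OpenAI's API expects on each request."""
    return {
        "Authorization": "Bearer " + api_key,   # the API Secret Key
        "OpenAI-Organization": org_id,          # the Personal Organisation ID
        "Content-Type": "application/json",
    }

headers = build_headers(API_KEY, ORG_ID)
```

The macro's two text boxes do exactly this job for you behind the scenes.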
The main macro is called OpenAI_ChatGPT_Completions.yxmc. There are other macros in the package; these are required but can be ignored, as they are sub-components of the main macro. To use the macro, open the yxzp file and choose to save the expanded files somewhere you will find them easily. As with any other macro, right-click on the canvas, choose "Insert Macro", browse to the folder where you expanded the yxzp file, find the OpenAI_ChatGPT_Completions.yxmc file and select it.
For input to the macro, all you have to provide is a record ID and a prompt string. The other variables are optional, and you can ignore them initially.
The Record ID is just an Int32 value, and the RecordID tool can quickly create this for you. This allows you to use a Join tool to hook your responses to your initial prompts, if you wish to process the responses in downstream flows or systems.
The Prompt string is an instruction or phrase in normal English, such as "Tell me what the weather is like in Putney" or "Write a limerick about a man from Nantucket". Of course, the beauty of Alteryx is that you can build multiple prompt strings (one per record), perhaps containing info about your customers for example, and ask ChatGPT to write personalised emails to those customers.
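In Alteryx you would build one prompt per record with a Formula tool; the same idea in Python, with invented customer fields purely for illustration, looks something like this:

```python
# Sketch: building one prompt string per record, as a Formula tool would.
# The customer records below are made up for illustration.
customers = [
    {"RecordID": 1, "Name": "Ada", "Product": "Designer licence"},
    {"RecordID": 2, "Name": "Grace", "Product": "Server upgrade"},
]

prompts = [
    {
        "RecordID": c["RecordID"],
        "Prompt": f"Write a short, friendly email to {c['Name']} "
                  f"thanking them for purchasing a {c['Product']}.",
    }
    for c in customers
]
```

Each record then carries its own personalised instruction into the macro.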
IN DEPTH GUIDE
This link takes you to an introduction on the Completion endpoint, and contains background info on how best to make your prompts...
https://platform.openai.com/docs/guides/completion/introduction
The link below details the values you can send to the AI to get a more tailored response.
https://platform.openai.com/docs/api-reference/completions
Here are the variables that the macro requires you to provide...
- RecordID (REQUIRED) (Int32); A simple record ID so that you can join response records back to your input records.
- Prompt (REQUIRED) (String); This is the text that ChatGPT's demo website accepts, and it is effectively where you "say something to ChatGPT". This is where you build a statement, that may be built using your own Alteryx data.
Here are the variables that are optional...
- Max_Tokens (OPTIONAL) (Int32); The API is free-form in its responses, and might witter on a bit. OpenAI use the concept of tokens, as approximations for words, to limit the size of the reply. The default value in the macro is 200 tokens, and you can override this by putting your own value here.
- Temperature_0to1 (OPTIONAL) (Double); Temperature is a value between 0 and 1 that reflects how "creative" you want the AI to be. For serious factual work, 0 is the suggested value: it should produce deterministic, repeatable results, and it is the macro's default. If you want the AI to write jokes or lyrics, or be a bit scatty and flighty, you can increase this value up to 1.
- ResponseResults (OPTIONAL) (Int32); You can ask the API to give you multiple replies with this value. If you use 0 for Temperature_0to1, then each reply will probably be the same, so this value defaults to 1 for a single response. If you use a high temperature value like 0.9, each response might be different, so you could select the most appropriate or creative response from the selection of responses.
- Attempts (OPTIONAL) (Int32); ChatGPT is very popular and still in beta, so the service is sometimes busy, or a request may fail. This Attempts value sets how many tries the macro should make to get a good response, and it defaults to 5 attempts per request. This is usually enough to ride over the minor temporary outages that occur, but you can raise this value if it is important for you to get a response for each prompt.
- Stop (OPTIONAL) (String); ChatGPT's Completions endpoint is designed to respond a little unpredictably, as a human might do. You can ask it to cease its response when it naturally generates a particular string, for example ". ", to indicate the end of a sentence. Again, the API documentation describes this better, and the macro defaults to not sending this string unless you override it here.
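To see how these variables fit together, here is a hedged Python sketch of the kind of JSON body the macro ends up sending. The field names follow OpenAI's Completions API reference, and the defaults mirror the macro's own:

```python
import json

def build_payload(prompt, max_tokens=200, temperature=0.0, n=1, stop=None):
    """Assemble the request body; `n` corresponds to ResponseResults."""
    body = {
        "model": "gpt-4",
        "prompt": prompt,
        "max_tokens": max_tokens,      # macro default: 200 tokens
        "temperature": temperature,    # macro default: 0 (deterministic)
        "n": n,                        # macro default: 1 response
    }
    if stop is not None:
        body["stop"] = stop            # only sent when you override it
    return json.dumps(body)

payload = build_payload("Tell me what the weather is like in Putney")
```

The macro assembles an equivalent body inside its Formula tools before handing it to the Download tool.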
OPERATION
The main outer macro calls some submacros. If you copy the main macro, please remember to also copy the numbered sub-macros and DosCommand macro with it, or re-expand the package in your target folder. The macros have relative directory addressing, so they will attempt to call each other in the same workflow folder as the main outer macro.
EXAMPLE
Below is a screenshot showing an example flow.
Note here that ChatGPT4 has followed the instructions more closely than v3.5, and returns only a numeric value. This removes the need to post process the response. This also costs less, as long responses use more tokens, and charges are rated per token used.
THROTTLING
Most APIs do not have infinite resources, so they ask you to comply with a request rate. ChatGPT's rate is 20 requests per minute on the free trial. The third box in the macro defaults to "Free Trial", and this adds a 3 second pause between each request, to comply with their throttling requirements. You have other options in this third selection box, but 99% of us are going to be on the default "Free Trial" setting.
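The throttling logic amounts to nothing more exotic than a pause between calls. As a sketch (the `send` callable stands in for whatever actually fires the request):

```python
import time

REQUESTS_PER_MINUTE = 20                     # free-trial rate
PAUSE_SECONDS = 60 / REQUESTS_PER_MINUTE     # 3 seconds between calls

def throttled(requests, pause=PAUSE_SECONDS, send=print):
    """Send each request, sleeping between calls to respect the rate limit."""
    for i, req in enumerate(requests):
        if i:                                # no need to wait before the first
            time.sleep(pause)
        send(req)
```

Selecting "PayAsYouGo" in the macro effectively shrinks that pause.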
OpenAI also have a token limit, which is very generous. As this is a demonstration connector, the token limit has been ignored, on the view that most people will stay well below the maximum number of tokens returned. If however you send or receive very long prompts or responses, and you fire multiple requests for long periods, you may bust this limit and get a 400-type response from the API.
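The Attempts behaviour described earlier boils down to a simple retry loop. A sketch, with the set of retryable statuses chosen here purely for illustration:

```python
import time

# Statuses worth retrying; an illustrative set, not the macro's exact list.
TRANSIENT = {429, 500, 502, 503}

def call_with_retries(send, attempts=5, pause=0.0):
    """Call send() up to `attempts` times and return the first 200 response.

    `send` is any callable returning a (status_code, body) pair.
    """
    last = None
    for _ in range(attempts):
        status, body = send()
        if status == 200:
            return status, body
        last = (status, body)
        if status not in TRANSIENT:
            break          # a 401 Unauthorized, say, will not fix itself
        time.sleep(pause)
    return last
```

This is usually enough to ride over short outages without hammering the service.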
SUMMARY
So, the most exciting and revolutionary new technology is available, and Alteryx can be used to immediately hook up to it and begin leveraging it. The amazing flexibility of Alteryx allows it to shape data to the requirements of a particular API, and process the responses back into your workflows and onward to downstream systems.
As new technologies come online, they will have their own APIs, and Alteryx will be perfectly placed to connect data into these new systems. We all know that Alteryx is a simple but powerful toolkit for rapid development, so if you invest in Alteryx, your investment will keep you in step with the revolutions to come that we cannot foresee.
Any problems with the macro or questions, please reply below.
CONTINUUM
This macro is the product of research by Continuum Jersey. We specialise in the application of cutting edge technology for process automation, applying Alteryx and AI tech for creative solutions. If you would like to know how your business could apply and benefit from Alteryx, and the agility and efficiency it provides, we would like to talk to you. Please visit dubDubDub dot Continuum dot JE , or send an email to enquiries at Continuum dot JE .
DISCLAIMER
This connector is free, for any use, commercial or otherwise. Resale of this connector is not permitted, either on its own or as a component of a larger macro package or product. Please bear in mind that you use it at your own risk. ChatGPT is a beta demonstration of technology, and the service's performance and availability might be inconsistent. This macro connector is just an HTTP bridge to the service, and it will either get a 200-OK response or some kind of error. This connector macro is not responsible for the quality of the returned data. Please observe all usual data protection rules for your business in your jurisdiction when interacting with web services.
UPDATES
- This macro now returns prompt, response and total token counts.
- Now tolerates backtick characters in prompt text.
ADDENDUM
If you want to use ChatGPT-4o, you can edit the OpenAI_ChatGPT_Completions.yxmc macro and find the Formula tool that sets the Model parameter. Change this to "gpt-4o" and the macro will use the new model. Note that it is O for Omni, and not a zero.
Thanks for sharing - I am looking forward to seeing it in my workflows 😀
Hello,
I keep getting that error : Header Log: 401;
Any idea why ?
Hi @Alteryx1-Epista , 401 is an Unauthorized code. The response string should have some more information about the error as well. ChatGPT4 is restricted access due to high demand, and you have to join a wait-list to get access - has it been confirmed by OpenAI that GPT4 is available to you? Thanks, Steve.
Hi Hiblet! You are 100 percent right. I will use the 3.5 connector! Thanks for the clarification
I keep getting this error. Any suggestions? The model: `gpt-4` does not exist
Hi @taylorbyers - Are you getting the same thing as Alteryx1-Epista? It could be the same issue. I have just done a quick test and I can get to GPT-4, so I know the model is up. You might need to put yourself on the waitlist for GPT-4. It usually takes only a couple of days for them to add you to the beta access list. Here is the link...
https://openai.com/waitlist/gpt-4-api
Until then you should be able to use the v3.5 Connector. Version 4 is very much better though; I have found that it responds in a much more predictable way and is better at following instructions.
Cheers,
Steve
Can we use the advanced data analysis (ex-Code interpreter) with this macro?
Hi @m_v , I believe that the code interpreter has been wrapped into the general endpoint for ChatGPT4, as far as I understand it. I have just had a quick squint at the API documentation, and that still seems to direct most things to Completions, which is what the macro uses. Hope that helps. Steve
Still unsure how to do this in the workflow - the data analysis lets you upload a file and then use a prompt to ask questions about the data and perform transformations (e.g. summarize).
@m_v Ah, I see. The functions that are available via the OpenAI web interface are not necessarily available via the API functions that Alteryx has to use. For instance, you cannot yet upload files via the API. Undoubtedly this will come, but at the moment, the API only has a text endpoint. The files endpoint that is available is only for training AIs with data. You might be able to put code in as text, and if you tell the AI that, maybe, what follows is C# or whatever language, it might be able to offer insight. You would just need to add what you want summarised as text in the prompt string.
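As a tiny illustrative sketch of that approach, here is the data-in-the-prompt idea in Python (the CSV values are made up):

```python
# Sketch: asking for a summary by pasting the data into the prompt as text.
# There is no file upload; everything travels as one prompt string.
csv_extract = "region,sales\nNorth,120\nSouth,95\nEast,210"

prompt = (
    "The following is CSV data of sales by region. "
    "Summarise it in one sentence.\n\n" + csv_extract
)
```

The resulting string is what you would feed into the macro's Prompt field.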
Hi, is there somewhere we can read more about the pricing models mentioned in the pricing model variable? i.e. Free, PayAsYouGo < 48hrs, PayAsYouGo.
Hi @roryor11 , This may be out of date now, as a lot has changed very quickly. The pricing model just affects the throttling rate of the macro. In Free mode, the calls are highly throttled to comply with the free API's restricted throttle rate. With PayAsYouGo, the throttling rate is increased so you can make more calls more quickly. OpenAI used to have a grace period of 48 hours where you could make calls quicker than free but not at full speed, which might now have gone. The short story is this: If you are paying for GPT4, use PayAsYouGo, and this will give you a high throttle rate. If you start to hit problems where you get 429-Busy responses, drop back to "Free" and this will reduce the rate at which you make calls.
Also, I think OpenAI now use token throttling, so they restrict how much data you can send and receive from the AI, based on how long your messages are. If you are using very long prompts and you get long responses, you might have to work around it by slowing down your calls or breaking prompts into parts. Hope that helps!