Comments
smugabart
9 - Comet

Thanks for sharing - I am looking forward to seeing it in my workflows 😀

Alteryx1-Epista
5 - Atom

Hello,

 

I keep getting this error: Header Log: 401;

Any idea why?

Hiblet
10 - Fireball

Hi @Alteryx1-Epista, 401 is an Unauthorized code.  The response string should have some more information about the error as well.  GPT-4 access is restricted due to high demand, and you have to join a wait-list to get it - has OpenAI confirmed that GPT-4 is available to you?  Thanks,  Steve.
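
For anyone debugging this outside Alteryx: a 401 almost always means the Authorization header is missing or carries an invalid key. Here is a minimal Python sketch of the same kind of call, assuming your key is in an `OPENAI_API_KEY` environment variable (the variable name is an assumption, not something the macro requires):

```python
import os
import requests

# Reproduce the call the macro makes and inspect the status code.
# OPENAI_API_KEY is an assumed environment variable holding your secret key.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "ping"}],
    },
)
print(resp.status_code)  # 401 here means the key is missing, wrong, or revoked
print(resp.json())       # the "error" object explains the failure in more detail
```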

Alteryx1-Epista
5 - Atom

Hi Hiblet! You are 100 percent right. I will use the 3.5 connector! Thanks for the clarification.

 


taylorbyers
7 - Meteor

I keep getting this error: The model: `gpt-4` does not exist. Any suggestions?

Hiblet
10 - Fireball

Hi @taylorbyers - Are you getting the same thing as Alteryx1-Epista?  It could be the same issue.  I have just done a quick test and I can get to GPT-4, so I know the model is up.  You might need to put yourself on the waitlist for GPT-4.  It usually takes only a couple of days for them to add you to the beta access list.  Here is the link...

 

    https://openai.com/waitlist/gpt-4-api

 

Until then you should be able to use the v3.5 Connector.  Version 4 is much better though; I have found that it responds in a much more predictable way and is better at following instructions.
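
As a side note, you can check which models your key can actually reach before paying: the API has a model-listing endpoint. A quick sketch with the official `openai` Python package (the v1-style client, which postdates this thread; it assumes `OPENAI_API_KEY` is set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# List every model this key is entitled to use; if "gpt-4" is absent,
# the account has not been granted access yet and the error is expected.
model_ids = sorted(m.id for m in client.models.list())
print("\n".join(model_ids))
print("gpt-4 available:", "gpt-4" in model_ids)
```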

 

Cheers,

 

Steve

mellisa346
5 - Atom

Thanks for this!!!

m_v
8 - Asteroid

Can we use the advanced data analysis (formerly Code Interpreter) with this macro?

Hiblet
10 - Fireball

Hi @m_v , I believe the code interpreter has been wrapped into the general endpoint for GPT-4, as far as I understand it.  I have just had a quick squint at the API documentation, and that still seems to direct most things to Completions, which is what the macro uses.  Hope that helps.  Steve
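
For anyone curious what that Completions exchange looks like outside Alteryx, here is a rough Python equivalent of a single text-in, text-out call. This is my approximation of the request shape, not the macro's actual internals:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One prompt string in, one completion string out -- the same shape of
# exchange the macro performs; code-interpreter features are not part of it.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarise: sales rose 12% in Q3."}],
)
print(response.choices[0].message.content)
```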

m_v
8 - Asteroid

[Screenshot: advanced analysis.JPG]

Still unsure how to do this in the workflow - the data analysis lets you upload a file and then use a prompt to ask questions about the data and perform transformations (e.g. summarize).

[Screenshot: question.JPG]

 

Hiblet
10 - Fireball

@m_v  Ah, I see.  The functions that are available via the OpenAI web interface are not necessarily available via the API functions that Alteryx has to use.  For instance, you cannot yet upload files via the API.  Undoubtedly this will come, but at the moment the API only has a text endpoint.  The files endpoint that is available is only for training AIs with data.  You might be able to put code in as text; if you tell the AI that what follows is C# or whatever language, it might be able to offer insight.  You would just need to add what you want summarised as text in the prompt string.
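
To make that concrete, "adding what you want summarised as text in the prompt string" just means concatenating the data into the prompt before it is sent. A hypothetical sketch; the CSV snippet and the wording of the instruction are invented for illustration:

```python
# Hypothetical example: fold tabular data into the prompt as plain text,
# since the API has no file-upload-for-analysis feature yet.
table_as_text = (
    "region,units,revenue\n"
    "North,120,4800\n"
    "South,95,3700\n"
)
prompt = (
    "The following is CSV data. Summarise total units and revenue by region.\n\n"
    + table_as_text
)
# `prompt` would then go into the macro's prompt field (or an API call like the one above).
```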

m_v
8 - Asteroid

@Hiblet thanks, gotcha! I'll impatiently wait for that feature to become available in the macro!

roryor11
5 - Atom

Hi, is there somewhere we can read more about the pricing models mentioned in the pricing model variable? i.e. Free, PayAsYouGo < 48hrs, PayAsYouGo.

 

Hiblet
10 - Fireball

Hi @roryor11 , This may be out of date now, as a lot has changed very quickly.  The pricing model just affects the throttling rate of the macro.  In Free mode, the calls are highly throttled to comply with the free API's restricted throttle rate.  With PayAsYouGo, the throttling rate is increased so you can make more calls more quickly.  OpenAI used to have a grace period of 48 hours where you could make calls quicker than free but not at full speed, which might now have gone.  The short story is this: if you are paying for GPT-4, use PayAsYouGo, and this will give you a high throttle rate.  If you start to hit problems where you get 429-Busy responses, drop back to "Free" and this will reduce the rate at which you make calls.

Also, I think OpenAI now use token throttling, so they restrict how much data you can send and receive from the AI, based on how long your messages are.  If you are using very long prompts and you get long responses, you might have to work around it by slowing down your calls or breaking prompts into parts.  Hope that helps!
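
For anyone scripting against the API directly rather than through the macro, a common way to handle those 429-Busy responses is exponential backoff. A minimal sketch in Python; the function name and retry counts are my own choices, not anything OpenAI or the macro prescribes:

```python
import time
import requests

def post_with_backoff(url, headers, payload, max_retries=5):
    """POST, retrying on 429 (rate limited) with exponentially growing waits."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.post(url, headers=headers, json=payload)
        if resp.status_code != 429:
            return resp      # success, or an error that retrying won't fix
        time.sleep(delay)    # wait before trying again
        delay *= 2           # 1s, 2s, 4s, 8s, ...
    return resp              # still 429 after all retries; caller decides what to do
```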

Jamel_Talbi
5 - Atom

Hiblet, I'm getting a 404 error that "The model `gpt-4` does not exist or you do not have access to it."  I tried buying credits but still get the same issue.

 

When I look in the API documentation I see the below list of models available and none of them are 'gpt-4' so I'm guessing this is now an error in the API call. Unfortunately, you haven't created an input/toggle for the model name and the API call macro is locked so I can't update it. Any thoughts for me? Thanks!


gpt-3.5-turbo-0125
gpt-3.5-turbo-1106
gpt-3.5-turbo-16k
gpt-3.5-turbo-instruct
gpt-3.5-turbo-instruct-0914
gpt-4.5-preview
gpt-4.5-preview-2025-02-27
gpt-4o
gpt-4o-2024-05-13
gpt-4o-2024-08-06
gpt-4o-2024-11-20
gpt-4o-audio-preview
gpt-4o-audio-preview-2024-10-01
gpt-4o-audio-preview-2024-12-17
gpt-4o-mini

Hiblet
10 - Fireball

Hi @Jamel_Talbi, You have been very pro-active and resourceful in trying to solve this, and you are almost there.  

 

There was a `gpt-4` model when the macro was written; it is just that time has rolled forwards and gpt-4o is now the preferred model.  The model list does change, and will continue to change, over time, so providing a dynamic way to stay in step would be difficult.  Also, each version might have changes that cause the macro to break.

 

I believe the mechanics of the API for gpt-4o are the same as those for gpt-4.  You should find that only the sub-macros are locked; the top-level macro should be open.  Open it, and you should find a nice friendly Continuum banner, and the macro itself.  Roughly in the middle of the macro is a Formula tool that sets a field called "model".  If you change this to "gpt-4o" and save, the macro should work and target the current model.
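
For what it's worth, outside of Alteryx the equivalent change is just swapping the model string in the request; everything else stays the same. A quick sketch with the `openai` Python package (v1-style client, `OPENAI_API_KEY` assumed to be set):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Same call as the gpt-4 example, with only the model name swapped --
# the API mechanics of gpt-4o match gpt-4, so nothing else changes.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```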

 

I have updated the package on this page to point to gpt-4o as the default model.  At some point I believe OpenAI are going to release gpt-5, and that will probably require some re-coding work.  Also, the current macro does not cope with nested JSON data; it will only work with vanilla string prompt data.  Sometime in the near future I will have to do a sprint refresh, and gpt-5 is probably a good time to do it.

 

Anyhow, let me know if this quick fix works or does not for you.  Also, if you have time, we would love to know your use case.

 

Many thanks,


Steve