Community Gallery

Post, download, and share all of your favorite tools and workflows — from Alteryx supported to user-built.
Comments
Alteryx1-Epista
5 - Atom

Hi Hiblet! You are 100 percent right, I will use the 3.5 connector! Thanks for the clarification.

taylorbyers
6 - Meteoroid

I keep getting this error. Any suggestions? The model: `gpt-4` does not exist
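That error usually means the API key in use does not have access to the `gpt-4` model. A minimal sketch of how you might check this before calling, assuming the `openai` Python package (v1.x); `accessible` is a hypothetical helper name:

```python
def accessible(model_ids, wanted="gpt-4"):
    """Return True if `wanted` is among the model ids the API key can see."""
    return wanted in set(model_ids)

# In a live script you would fetch the ids first (assumption: openai>=1.0):
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   model_ids = [m.id for m in client.models.list()]
#   print(accessible(model_ids))
```

If `gpt-4` is absent from the list, the key's account has not been granted access to that model, and the macro will get the "does not exist" response.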

m_v
8 - Asteroid

Can we use the advanced data analysis (ex-Code interpreter) with this macro?

Hiblet
10 - Fireball

Hi @m_v , I believe the code interpreter has been wrapped into the general GPT-4 endpoint, as far as I understand it. I have just had a quick squint at the API documentation, and it still seems to direct most things to Completions, which is what the macro uses. Hope that helps. Steve

m_v
8 - Asteroid

[Attachment: advanced analysis.JPG]

Still unsure how to do this in the workflow: the advanced data analysis tool lets you upload a file and then use a prompt to ask questions about the data and perform transformations (e.g. summarize).

[Attachment: question.JPG]

 

Hiblet
10 - Fireball

@m_v  Ah, I see. The functions available via the OpenAI web interface are not necessarily available via the API functions that Alteryx has to use. For instance, you cannot yet upload files via the API. Undoubtedly this will come, but at the moment the API only has a text endpoint; the files endpoint that does exist is only for training AIs with data. You might be able to put code in as text, and if you tell the AI that, say, what follows is C# or whatever language, it might be able to offer insight. You would just need to add whatever you want summarised as text in the prompt string.
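The workaround described above, inlining the data as text in the prompt string, can be sketched as follows. This is an illustrative helper, not part of the macro; `build_prompt` and its parameters are hypothetical names:

```python
def build_prompt(question, data_text, language=None):
    """Compose one text prompt that inlines the data, since the API
    (at the time of this thread) accepts text, not file uploads."""
    if language:
        header = f"The following is {language} code:\n"
    else:
        header = "Here is the data as plain text:\n"
    return f"{header}{data_text}\n\nTask: {question}"
```

The resulting string is what you would feed into the macro's prompt field, in place of the file upload the web interface offers.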

m_v
8 - Asteroid

@Hiblet thanks, gotcha! I'll impatiently wait for that feature to become available in the macro!

roryor11
5 - Atom

Hi, is there somewhere we can read more about the pricing models mentioned in the pricing model variable? i.e. Free, PayAsYouGo < 48hrs, PayAsYouGo.

 

Hiblet
10 - Fireball

Hi @roryor11 , This may be out of date now, as a lot has changed very quickly. The pricing model just affects the throttling rate of the macro. In Free mode, the calls are heavily throttled to comply with the free API's restricted rate limit. With PayAsYouGo, the throttling rate is increased so you can make more calls more quickly. OpenAI used to have a grace period of 48 hours where you could make calls faster than on the free tier but not at full speed, which might now have gone. The short story is this: if you are paying for GPT-4, use PayAsYouGo, and this will give you a high throttle rate. If you start to hit problems where you get 429-Busy responses, drop back to "Free" and this will reduce the rate at which you make calls.
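The usual way to cope with those 429-Busy responses is to retry with an increasing delay. A minimal sketch, with hypothetical names throughout and the 429 modelled as a plain `RuntimeError` rather than a real HTTP response:

```python
import time

def call_with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry send() with exponential backoff whenever it signals a
    429-Busy response (modelled here as RuntimeError)."""
    for attempt in range(max_retries):
        try:
            return send()
        except RuntimeError:               # stand-in for an HTTP 429
            sleep(base_delay * (2 ** attempt))
    return send()  # final attempt; let any error propagate
```

The `sleep` parameter is injectable only so the behaviour is easy to test; a real caller would leave it as `time.sleep`.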

Also, I think OpenAI now uses token throttling, so they restrict how much data you can send to and receive from the AI, based on how long your messages are. If you are using very long prompts and you get long responses, you might have to work around it by slowing down your calls or breaking prompts into parts. Hope that helps!