Community Gallery

Create, download, and share user-built tools and workflows.
Comments
smugabart
9 - Comet

Thanks for sharing - I am looking forward to seeing it in my workflows 😀

Alteryx1-Epista
5 - Atom

Hello,

 

I keep getting this error: Header Log: 401;

Any idea why?

Hiblet
10 - Fireball

Hi @Alteryx1-Epista, 401 is an Unauthorized code. The response string should have some more information about the error as well. GPT-4 access is restricted due to high demand, and you have to join a wait-list to get access - has it been confirmed by OpenAI that GPT-4 is available to you? Thanks, Steve.
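For anyone debugging this: a 401 (and most other OpenAI API errors) comes back with a JSON body that says more than the status code alone. Here is a minimal sketch of pulling the detail out for logging - it assumes the standard OpenAI error envelope (`{"error": {"message": ..., "code": ...}}`); the sample body below is illustrative, not a captured response:

```python
import json

def describe_api_error(status_code, body_text):
    """Summarise an OpenAI-style error response for logging.

    Assumes the standard error envelope: {"error": {"message": ..., "code": ...}}.
    Falls back to the raw body if it isn't valid JSON.
    """
    try:
        err = json.loads(body_text).get("error", {})
    except json.JSONDecodeError:
        return f"HTTP {status_code}: {body_text[:200]}"
    return f"HTTP {status_code}: {err.get('code')} - {err.get('message')}"

# Illustrative 401 body of the kind the macro would receive:
sample = '{"error": {"message": "Incorrect API key provided", "type": "invalid_request_error", "code": "invalid_api_key"}}'
print(describe_api_error(401, sample))
```

Logging the `message` field usually tells you straight away whether it is a bad key, a missing key, or a model-access problem.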

Alteryx1-Epista
5 - Atom

Hi Hiblet! You are 100 percent right. I will use the 3.5 connector! Thanks for the clarification.

 


taylorbyers
6 - Meteoroid

I keep getting this error. Any suggestions? The model: `gpt-4` does not exist

Hiblet
10 - Fireball

Hi @taylorbyers - Are you getting the same thing as Alteryx1-Epista?  It could be the same issue.  I have just done a quick test and I can get to GPT-4, so I know the model is up.  You might need to put yourself on the waitlist for GPT-4.  It usually takes only a couple of days for them to add you to the beta access list.  Here is the link...

 

    https://openai.com/waitlist/gpt-4-api

 

Until then you should be able to use the v3.5 Connector. Version 4 is much better though - I have found that it responds in a more predictable way and is better at following instructions.
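If you want the workflow to cope with this automatically, one option is to fall back to 3.5 whenever the API reports the model as unavailable. A hypothetical sketch - the "does not exist" check matches the error text quoted in this thread, and the fallback model name is an assumption:

```python
def choose_model(preferred, error_message=None, fallback="gpt-3.5-turbo"):
    """Return the preferred model, or the fallback if a prior call
    failed with a 'model does not exist' style error."""
    if error_message and "does not exist" in error_message:
        return fallback
    return preferred

# First attempt failed with the error quoted in the thread, so retry on 3.5:
model = choose_model("gpt-4", "The model: `gpt-4` does not exist")
```

That way the same workflow runs for users with and without GPT-4 access.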

 

Cheers,

 

Steve


m_v
8 - Asteroid

Can we use the advanced data analysis (ex-Code interpreter) with this macro?

Hiblet
10 - Fireball

Hi @m_v, I believe the code interpreter has been wrapped into the general GPT-4 endpoint, as far as I understand it. I have just had a quick squint at the API documentation, and that still seems to direct most things to Completions, which is what the macro uses. Hope that helps. Steve

m_v
8 - Asteroid

[Attachment: advanced analysis.JPG]

Still unsure how to do this in the workflow - the data analysis feature lets you upload a file and then use a prompt to ask questions about the data and perform transformations (e.g. summarise).

 

[Attachment: question.JPG]

 

Hiblet
10 - Fireball

@m_v Ah, I see. The functions that are available via the OpenAI web interface are not necessarily available via the API functions that Alteryx has to use. For instance, you cannot yet upload files via the API. Undoubtedly this will come, but at the moment the API only has a text endpoint. The files endpoint that does exist is only for training AIs with data. You might be able to put code in as text: if you tell the AI that what follows is C# or whatever language, it might be able to offer insight. You would just need to add whatever you want summarised as text in the prompt string.
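To make that workaround concrete, embedding the data in the prompt string can look like this - a minimal sketch where the instruction wording is purely illustrative:

```python
def build_summary_prompt(language, code_text):
    """Wrap a code or data snippet in a plain-text instruction,
    suitable for sending as the prompt string to a text endpoint."""
    return (
        f"The following is {language} code. Summarise what it does in one sentence.\n\n"
        f"{code_text}"
    )

prompt = build_summary_prompt("C#", 'Console.WriteLine("Hello");')
```

The same pattern works for data: paste a sample of rows into the prompt and ask for the transformation you want described or performed.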

m_v
8 - Asteroid

@Hiblet thanks, gotcha! I'll impatiently wait for that feature to become available in the macro!

roryor11
5 - Atom

Hi, is there somewhere we can read more about the pricing models mentioned in the pricing model variable? i.e. Free, PayAsYouGo < 48hrs, PayAsYouGo.

 

Hiblet
10 - Fireball

Hi @roryor11, this may be out of date now, as a lot has changed very quickly. The pricing model just affects the throttling rate of the macro. In Free mode, the calls are heavily throttled to comply with the free API's restricted rate limit. With PayAsYouGo, the throttling rate is increased so you can make more calls more quickly. OpenAI used to have a grace period of 48 hours where you could make calls quicker than free but not at full speed; that may now have gone. The short story is this: if you are paying for GPT-4, use PayAsYouGo, which gives you a high throttle rate. If you start to hit problems where you get 429 (Too Many Requests) responses, drop back to "Free" and this will reduce the rate at which you make calls.
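The throttling described above can be pictured as a minimum delay between calls per pricing mode. A sketch - the specific delay values are illustrative, not OpenAI's published limits:

```python
# Illustrative minimum seconds between API calls per pricing mode.
# These numbers are assumptions for the sketch, not official limits.
DELAYS = {"Free": 20.0, "PayAsYouGo < 48hrs": 1.0, "PayAsYouGo": 0.1}

def throttle_delay(pricing_mode):
    """Return the pause to insert before the next API call for a given mode.
    Unknown modes get the safest (slowest) rate."""
    return DELAYS.get(pricing_mode, DELAYS["Free"])
```

A caller would simply `time.sleep(throttle_delay(mode))` before each request; dropping from PayAsYouGo back to Free is then just a one-word config change.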

Also, I think OpenAI now uses token throttling as well, so they restrict how much data you can send to and receive from the AI based on how long your messages are. If you are using very long prompts and getting long responses, you might have to work around it by slowing down your calls or breaking prompts into parts. Hope that helps!
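Breaking prompts into parts can be approximated with a rough token budget. The common rule of thumb of about 4 characters per token is an approximation, not OpenAI's actual tokenizer, so treat the numbers here as a sketch:

```python
def split_prompt(text, max_tokens=1000, chars_per_token=4):
    """Split text into chunks that should each stay under an
    approximate token budget (~4 chars per token, a rough heuristic)."""
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

chunks = split_prompt("x" * 10000, max_tokens=1000)  # 3 chunks of up to 4000 chars
```

For anything precise you would count tokens with the model's real tokenizer, but a character budget like this is usually enough to stay clear of the limit.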