Comments
starkey
7 - Meteor

Thanks for the detailed documentation on this! I'm currently getting these errors when attempting to use the macro. Would you be able to provide any guidance on how to resolve these?

 

Error: OpenAI_ChatGPT_Completions (3): 3_Wrap_Payload_and_Digester_Macro (36): 2_Batch_Call_Macro (16): Record #1: 1_It_Retry_Macro (19): Iteration #1: 0_Sub_SingleCall_Macro (10): Tool #14: Error transferring data "https://api.openai.com/v1/chat/completions": SSL connect error
Error: OpenAI_ChatGPT_Completions (3): 3_Wrap_Payload_and_Digester_Macro (36): 2_Batch_Call_Macro (16): Record #1: 1_It_Retry_Macro (19): Iteration #1: 0_Sub_SingleCall_Macro (10): Tool #23: Tool #17: Failed to run external program "C:\Users\JSTARK~1\AppData\Local\Temp\Engine_36916_8c3ca753ed7ec44e9700fbb2fb70331a_\DosCommand.bat": The system cannot find the file specified. (2)

 

Thank you!

Hiblet
10 - Fireball

Hi @starkey, I will see what I can do.

 

The SSL connect error is a general network failure, which would happen if your internet dropped out momentarily, or if the ChatGPT server on the other end failed to reply (due to a networking issue or the service being down).

The "...cannot find the file specified" error is possibly due to the macro not completing after the initial error.

 

Has the macro ever worked for you, or has it never worked?

 

If you restart Alteryx and check that your internet connection is good, does this error recur?

 

Steve

starkey
7 - Meteor

Thanks for the follow-up, Steve. It ended up being a VPN firewall issue blocking the call, but now I'm getting this response: "You exceeded your current quota, please check your plan and billing details." I tried credentials from both a work and a personal OpenAI account. Any ideas?

Hiblet
10 - Fireball

Yes, this is the dreaded end of the free trial period.  You only get a limited period where you can use the AI for free, timed from when you set up your account.  Were the accounts set up a while ago, or are they brand new?  If new, you should not be seeing this problem, but if they have been set up for a while, they may have gone beyond the free period.  You could set up another account using a new email address and card, or you could move to a paid plan, perhaps.  Either way, the macro looks to be working correctly and that is at least a positive, thanks for the update.

starkey
7 - Meteor

Ahh yes, that's it. I see now that my trial had already expired. Thanks for the feedback!

Hiblet
10 - Fireball

No problem at all.  We would love to know more about your use-cases, maybe help out with trying to get you to your goals.  Best of luck!

Adrien_Sourdille
8 - Asteroid

Really great stuff! Thank you for creating this very useful macro; the potential uses for it are very exciting...

AdamR_AYX
Alteryx Alumni (Retired)

Thanks for creating this.

 

I am finding that my results get truncated when I try to use it. Any ideas why?

 

This is my prompt

 

I need you to create me some sample data in delimited format. Use | characters for field delimiters. I need the data as a single line with no carriage return characters. Use ¬ to mark new lines. The data should have these fields: FirstName, LastName, DateOfBirth, City. There should be 25 rows.

 

But I only get 504 characters of response.

 

If I use the same prompt on the OpenAI website I get a full response, which for this example is 921 chars.

Hiblet
10 - Fireball

Hi @AdamR_AYX, this could be due to the token limit.  If you have not specified a token limit, a default is assumed.  You can override that default value using a Max_Tokens data value.  The default is 200 tokens, and this value is the sum of the prompt and the response.  Token use is a cost, which is why it is limited unless you want to override it.  I think the model maximum is 8000 tokens (might be wrong, maybe 4000).  Try pushing the token limit on your calls up to 4000; that may help, and let me know.  Good luck!
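
(For illustration only, here is a rough Python sketch of the same call with a larger token limit. The endpoint and payload shape follow the public chat-completions REST API; the requests library is assumed to be available, and the key, prompt and limit values are placeholders, not the macro's internals.)

# Hypothetical sketch, not the macro's internals: raising the completion
# token limit so a long reply is not cut off. Key and values are placeholders.
import requests

API_KEY = "sk-..."  # placeholder
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Generate 25 rows of | delimited sample data."}],
        "max_tokens": 2000,  # must still fit, together with the prompt, inside the model's context window
    },
    timeout=60,
)
choice = resp.json()["choices"][0]
print(choice["finish_reason"])        # "length" means the reply was truncated by the token limit
print(choice["message"]["content"])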

AdamR_AYX
Alteryx Alumni (Retired)

That fixed it! Thank you! :)

Hiblet
10 - Fireball

Awesome!  Thanks for confirming, it will be useful to anyone else having a similar problem.  I like your idea of getting the AI to generate test data for you, and I'm glad I could help.
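
(Side note for anyone reusing the test-data idea: the single-line, ¬ and | delimited response can be split back into rows with a couple of lines of code. A minimal Python sketch, using an invented sample string:)

# Sketch: splitting the single-line, "¬"-separated, "|"-delimited test data
# back into rows. The sample response text is invented for illustration.
response = "Alice|Smith|1990-01-01|London¬Bob|Jones|1985-06-15|Leeds"

rows = [line.split("|") for line in response.split("¬")]
for first_name, last_name, dob, city in rows:
    print(first_name, last_name, dob, city)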

dmytrofomenko
6 - Meteoroid

I have the same connection issue. 
It seems to behave differently depending on whether the AMP engine is used. Also, the connector works exactly as required on 2020.4.5.12471 but fails on 2023.1 (with AMP on it is the SSL issue; with AMP off it is a different one).

Hiblet
10 - Fireball

Hi @dmytrofomenko, can you confirm the errors you are getting, please?  I usually run with defaults, which I assume is AMP off.  The SSL issue above was found to be Starkey's VPN firewall blocking the call.  If you have ever made a successful connection to OpenAI, then I do not think that can be the issue in this case.  If a connection is made and the call fails for some reason, a diagnostic file is dropped to the Temp directory, and you can find a reference to that file in the message output for your flow.  If possible, can you provide screenshots of any errors you are getting, please?  Thanks.

dmytrofomenko
6 - Meteoroid

Sure, here you go. 

AMP ON:
[Screenshot: AMP ON.png]

 

Error: OpenAI_ChatGPT_Completions (64): 3_Wrap_Payload_and_Digester_Macro (36): 2_Batch_Call_Macro (16): Record #1: 1_It_Retry_Macro (19): Iteration #1: 0_Sub_SingleCall_Macro (10): Tool #14: Error transferring data "https://api.openai.com/v1/chat/completions": SSL connect error
Error: OpenAI_ChatGPT_Completions (64): 3_Wrap_Payload_and_Digester_Macro (36): 2_Batch_Call_Macro (16): Record #1: 1_It_Retry_Macro (19): Iteration #1: 0_Sub_SingleCall_Macro (10): Tool #23: Tool #17: Failed to run external program "C:\Users\...\AppData\Local\Temp\Engine_22784_9d5c6af5e733684f90758f4ed0c86133_\DosCommand.bat": The system cannot find the file specified. (2)

 


AMP OFF: 
[Screenshot: AMP OFF.png]

 

Error: OpenAI_ChatGPT_Completions (64): 3_Wrap_Payload_and_Digester_Macro (36): 2_Batch_Call_Macro (16): Record #20: 1_It_Retry_Macro (19): Iteration #1: 0_Sub_SingleCall_Macro (10): Tool #23: Tool #17: Failed to run external program "C:\Users\...\AppData\Local\Temp\Engine_32756_8ddd646e2774481887b36d5120725536_\DosCommand.bat": The system cannot find the file specified. (2)
Error: OpenAI_ChatGPT_Completions (64): 3_Wrap_Payload_and_Digester_Macro (36): 2_Batch_Call_Macro (16): Record #20: 1_It_Retry_Macro (19): The output connection "Output" was not valid

Hiblet
10 - Fireball

Hi @dmytrofomenko , thanks for that.

 

I found this issue with the download tool vs AMP...

 

https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Discussions/Download-tool-error/td-p/87741...

 

This may be the root of the AMP-on SSL error.  AMP runs things in parallel.  The macro moves heaven and earth to make only one OpenAI API call at a time, because the OpenAI site is throttled and rejects burst traffic.  So, after thinking this through, this should definitely be running in AMP-off mode.  There is a way to detect the AMP state and error out at the start if AMP is on.  I will see if I can get the macro to do this, but it might take some fiddling.  For now, AMP should be off.
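
(For illustration only, the single-call-at-a-time, throttled pattern described above looks roughly like this in Python; the wait value, prompts and call body are placeholders, not the macro's internals.)

# Sketch of the serial, throttled pattern: strictly one request at a time,
# with a fixed pause between calls so no burst traffic reaches the API.
# The wait value, prompts and call body are placeholders.
import time

WAIT_SECONDS = 20  # illustrative pause between calls

def call_api(prompt):
    # placeholder for one HTTP request to the chat-completions endpoint
    return f"response to: {prompt}"

for prompt in ["prompt 1", "prompt 2", "prompt 3"]:
    result = call_api(prompt)   # only one call in flight at any time
    time.sleep(WAIT_SECONDS)    # throttle before moving to the next record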

 

The AMP-off case is interesting.  Here, the macro is saying that it cannot find the DosCommand.bat file in the Temp folder.  Do you have the DosCommand.yxmc macro in the folder with the other component macros?  The DosCommand.yxmc macro is the thing that writes the DosCommand.bat file in the temp directory, so if the macro is missing, it might not be writing the batch file.  The second error is probably a knock-on from the first error.

 

For info, the DosCommand.yxmc macro just issues DOS commands that are piped into it.  In this case, the DOS command is a ping to a non-existent server that times out, and this is used to hang up the sub-macro for a known period of time, so that it complies with OpenAI's throttling rules.

 

Note: If the workflow is run on Server and the mode is Safe, RunCommand tools can be blocked, but from the look of it you are not running on Server. [https://community.alteryx.com/t5/Alteryx-Server-Discussions/Run-Command-on-Alteryx-Server/td-p/90124...]

dmytrofomenko
6 - Meteoroid

Hi Hiblet, 

 

AMP being off is a reasonable option in this sense, I agree. 

 

Regarding the components of the macro, yes, they are all in the same folder. For testing purposes I open the same flow file in the same folder with the 2020 and 2023 versions. 2020 works, 2023 doesn't. I find it rather awkward as well. If you say that it should write the .bat file into Temp, maybe the 2023 version for whatever reason doesn't have the rights to write into Temp on my laptop? Not sure how to fix this though... or test it.

And no, it is not the server, it is on my laptop.

Hiblet
10 - Fireball

Hi @dmytrofomenko

 

I have set up a very small workflow to test the DosCommand.yxmc macro, and I have put it on my google drive here...

 

https://drive.google.com/file/d/1_cgDw9HJV61V3KQS9yPdU8WL4Jo2ob0w/view?usp=sharing

 

This flow, DosCommand_Harness2.yxmd, expects the macro to be in the local folder where it runs.  All it does is call the macro with a "dir > dir.txt" command, but it also tries to write an output file to the temp directory, just as the macro does.

 

I think from your comments above that you are running two instances of Alteryx, side by side, on one machine - is that correct?  I am guessing here that only one of the installations has write privilege to the Temp working directory for Alteryx.  This mini test flow tries to write to the temp folder just as the ChatGPT macro does, so I would expect it to behave the same way: the 2020 version will work, and the 2023 version will not.
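
(As a purely hypothetical side check, outside Alteryx entirely, write access to the user Temp directory can be tested with a few lines of Python:)

# Hypothetical side check: can the current user write to the Temp directory?
import os
import tempfile

temp_dir = tempfile.gettempdir()  # typically C:\Users\<user>\AppData\Local\Temp on Windows
test_path = os.path.join(temp_dir, "write_test.txt")
try:
    with open(test_path, "w") as f:
        f.write("ok")
    os.remove(test_path)
    print(f"Write access to {temp_dir} looks fine")
except OSError as err:
    print(f"Cannot write to {temp_dir}: {err}")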

 

If you have time, can you tell me more about how you find the ChatGPT macro awkward, and maybe suggest how it could be improved?  User feedback is always welcome.

 

dmytrofomenko
6 - Meteoroid

Hi Hiblet, 

 

Thank you. Yes, two instances on one machine, correct. It could indeed be a privilege/rights management issue.

 

I will download it, test it, and let you know. The "awkward" was not about the macro itself, but rather the strange way Alteryx fails (or doesn't) on different versions. The macro is amazing )

 

Will get back to you in a while. Thank you! 

dmytrofomenko
6 - Meteoroid

Hi Hiblet, 

 

No errors; it runs fine on the 2023 version.

 

[Screenshot: dmytrofomenko_0-1687424061733.png]

 

Hiblet
10 - Fireball

Hi @dmytrofomenko 

 

Hmm, that is strange, I would have put money on that test flow not working.  That leaves me a bit stumped.  I think it is a rights issue, but I do not know how two instances on the same machine handle the issue of having their own temp directories.

 

I am running on 2022.1. I will update to 2023.1 and see if it breaks.

 

Thanks for clarifying the "awkward" comment. I am conscious that the interface is "busy" and the instructions are long, but the complexity of the ChatGPT settings forces me to expose all the variables to the user.  I appreciate that you have made it clear that it is not a problem with the macro interface, good feedback, thanks!  

Hiblet
10 - Fireball

Hi @dmytrofomenko , 

 

I updated my Alteryx to 2023.1, but the macro still works for me.  That leaves me a little perplexed; I have no more ideas about how I can help.

 

My only thought is that, when you generate the error, the error message should have links to the temp folder.  You could right click the link and do "Open Folder", and then right click in the temp folder and try to create a new empty text file or similar.  I am pretty sure you should be able to make a file, as the folder will be in your user folder where you have full rights.  However, the folder may not exist, and that might be the problem.  Very hard to debug without access to your machine.

 

 

dmytrofomenko
6 - Meteoroid

Thank you for your help, @Hiblet 

 

No worries, for the moment I am fine with the old version, which still works.

I will try to investigate when I have some more time, will let you know if I get anything out of it. 

 

Regards,

Hiblet
10 - Fireball

Hi @dmytrofomenko

 

Thanks for your patience, and your engagement on this problem.  If need be, we can set up a Teams or similar call and I can try to debug interactively with you on 2023.1.  I am intrigued as to why this is not working, and I would like to find out, but if you are OK with working with the older version, then that is no problem.

 

Thanks for your comments and good luck.

DY1
7 - Meteor

Hi @Hiblet ,

 

Fantastic connector. I have one query about the output returned from ChatGPT: is there a way to get the response back with the formatting (specifically line breaks) that is present if you enter the prompt in the ChatGPT web UI?  Is the connector stripping them out, or is there some additional config that needs to be done to retain that detail?

 

many thanks

Hiblet
10 - Fireball

Hi @DY1 , the connector is not stripping anything out.  Like any good coder, I am extremely lazy, and I am just pushing the response from ChatGPT back to the user.  Probably the web interface is prettying up the response for display.  Line breaks have to be escaped in the JSON-format data that is used in the calls to the AI, so presumably the web developers are formatting the responses so that they look nice.  You might try asking ChatGPT to do something like return one item per line.  It may comply, but it will put a "\n" bit of text between responses.  In Alteryx, you could then substitute a real carriage-return-line-feed pair by using a Replace() function in a Formula tool.  I can help with this if you need.
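
(To make that concrete, a small Python sketch of the same idea: the escaped line break in the JSON payload becomes a real newline when the JSON is parsed, and any custom marker you ask the model to emit can be swapped for a CRLF afterwards, which is what Replace() in a Formula tool would do. The marker and sample text are invented.)

# Sketch: line breaks arrive escaped inside the JSON payload, and a custom
# marker can be swapped for a real CRLF afterwards. Sample text is invented.
import json

raw_json = '{"content": "item one\\nitem two ¬ item three"}'
content = json.loads(raw_json)["content"]   # the escaped "\n" becomes a real newline on parsing
formatted = content.replace("¬", "\r\n")    # same effect as Replace() in a Formula tool
print(formatted)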

 

Thanks for the positive feedback and good luck,

 

Steve

lmirandam07
5 - Atom

Hello! I have a workflow for some content generation; it was working well, but suddenly the macro began returning this message:

[Screenshot: MicrosoftTeams-image (1).png]

Any thoughts on this?

 

Thanks!

Hiblet
10 - Fireball

Hi @lmirandam07 , this error happens when the JSON data is malformed, and it originates at the server.  The JSON data can be malformed if it contains illegal characters or has something else that makes it break the rules of JSON.  One possibility is that the packet is too big and gets truncated.  When an error occurs, you should see in the macro progress messages that a diagnostic file is dumped out to the workflow temp directory.  If you right-click the diagnostic file write line and do "Open Folder", you should be able to find the diagnostic file.  If you zip and post this file, I could take a look at it, or you could mail me directly at steve at continuum dot je (expressed phonetically to avoid automated crawlers harvesting my mail address).  I can then examine the JSON packet and see what is wrong, and hopefully get to the bottom of this.
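
(As a side note, the same validity check the server applies can be reproduced in a couple of lines of Python, which is handy for eyeballing a diagnostic dump. The sample payload here is invented and deliberately contains an unescaped newline, so the check reports it as malformed.)

# Sketch: checking whether a request payload is well-formed JSON, as the
# server does. This invented payload contains a raw, unescaped newline,
# so the check reports it as malformed.
import json

payload = '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "line one\nline two"}]}'
try:
    json.loads(payload)
    print("Payload is valid JSON")
except json.JSONDecodeError as err:
    print(f"Malformed JSON: {err}")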

lmirandam07
5 - Atom

Thanks for your help, @Hiblet. I have already sent you an email with the file. Hopefully you can help me.

dmytrofomenko
6 - Meteoroid

Hello again. 
I am encountering a bit of an issue (apparently the problem is somewhere in the connection/server/etc.). When I create a queue of prompts, it sometimes gets stuck for a long time on one prompt (apparently not getting a response and retrying multiple times, or waiting for a timeout). Is the timeout in this case something server-driven, or could I circumvent it? I have about 400 prompts to run, and it would be easier not to wait on any one for too long, but rather have them all finish and then manually re-run the ones that failed.

Regards. 

Hiblet
10 - Fireball

Hi @dmytrofomenko , the "Attempts" setting controls how many retries the macro will go through before moving on to the next record.  If you set this to 1, each record will be tried just once, and you can then re-run any failures as another batch.  When there is a failure, a CSV file is created in the Alteryx temp directory, and in this file you will see the message that comes back from the server saying why it failed.  If most of your records eventually succeed, this may be because the server side is very busy.  I hope the Attempts setting is what you need; please let me know otherwise.  Steve.
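
(For anyone scripting the same idea outside Alteryx, a rough Python sketch of "one attempt per record, collect the failures for a second pass"; the records and the call itself are placeholders.)

# Sketch of "one attempt per record, re-run the failures later".
# The records and the call itself are placeholders.
def call_api(prompt):
    # placeholder: returns (ok, response_or_error) for one API call
    return True, f"response to: {prompt}"

records = ["prompt 1", "prompt 2", "prompt 3"]
failures = []
for record in records:
    ok, result = call_api(record)
    if not ok:
        failures.append((record, result))  # keep the error message, like the CSV dropped to Temp

# the failed records can now be fed back in as a second, smaller batch
print(f"{len(failures)} of {len(records)} records need a re-run")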

dmytrofomenko
6 - Meteoroid

@Hiblet , thank you. This is what I ended up doing even before your recommendation )) I have investigated a bit; apparently on the API side there is a 600-second timeout on the connection in case of no response, and this is not something I can change in the connector. With a Python API connection I could, but I will end up just leaving the workflow running overnight and hoping for the best.

Hiblet
10 - Fireball

Hi @dmytrofomenko , that is great to know.  In older versions of Alteryx you cannot change the default timeout, but the Download tool has now been updated and the timeout can be set manually.  600 seconds is a very long time to wait for the server to respond - ten minutes, incredible.  When I have used ChatGPT interactively it has been very responsive, and even on complex tasks it has taken under a minute.  Thank you for bringing this to my attention, it is always nice to learn something new.
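
(For reference, with a direct HTTP client the read timeout is just a parameter. A minimal Python sketch using the requests library, with a placeholder key and illustrative values:)

# Sketch: capping how long a single call may wait for a reply, rather than
# sitting on the server-side ~600 second limit. Key and values are placeholders.
import requests

try:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": "Bearer sk-..."},   # placeholder key
        json={"model": "gpt-3.5-turbo",
              "messages": [{"role": "user", "content": "Hello"}]},
        timeout=60,  # give up after 60 seconds and move on
    )
    print(resp.status_code)
except requests.Timeout:
    print("No response within 60 seconds; record this prompt for a re-run")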

isleiman
5 - Atom

Hi @Hiblet

I am using a CSV of two rows and two columns, and when I run the tool, I get the following output:

 

1 Header Log: 429; You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors. 0 0 0

 

How do I solve this?

[Screenshot: 429.PNG]

Hiblet
10 - Fireball

Hi @isleiman, this is an error coming from the ChatGPT server, saying that your usage limits have been exceeded.  This is most likely because your 3-month free access period has elapsed.  After three months, OpenAI require you to move to a paid plan.  Hope that helps,

 

Steve

isleiman
5 - Atom

Hi @Hiblet

 

I trust this message finds you well.

 

I am reaching out to you regarding an issue I have encountered while using the OpenAI GPT macro in Alteryx Designer. I have integrated the macro into my workflow example, and have loaded the API Secret Key and Organization ID from my ChatGPT 4 paid account. However, when executing the workflow with a test Excel file, I received the following error message:

 

Header Log: 429;

You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.

 

I find this situation perplexing as this error typically occurs with a free account, not a paid one. Additionally, I would like to mention that the workflow itself does not provide any errors upon execution.

 

[Screenshot: isleiman_0-1703081804060.jpeg]

 

Could you please assist me in resolving this issue? I have verified my plan and billing details, and everything appears to be in order. 

 

Your prompt attention to this matter is greatly appreciated.

 

Best regards,

Hiblet
10 - Fireball

Hi @isleiman , 

 

It is odd that you are getting this message on your ChatGPT4 paid account.

 

It might be that you are pushing the throttling limit too hard, or OpenAI may have reduced the throttling limit.  The macro has a couple of throttle settings.  On the main configuration pane for the macro, the third box is called "Select OpenAI Completion (Davinci) Pricing Model", and this effectively controls throttling.  With this box set to "Free Trial", the macro will wait between calls for a significant time, and this should mean that it never exceeds the throttling limit on a paid account.

 

Also, please check that AMP is turned OFF for this macro.  AMP makes Alteryx run in a multithreaded mode, and this can make the macro issue too many calls to the API.  If you click on the white canvas area you should get your workflow's overall configuration settings.  In the "Runtime" tab, there is a "Use AMP" setting and this should be unchecked.

 

Finally, I would suggest using the ChatGPT4 version of the macro, as a sort of A/B test.

 

I hope this brings you a solution; if not, please let me know how you get on.

 

Thanks,

 

Steve

isleiman
5 - Atom

Hi Hiblet

 

I'm sorry for the delay in this message.

 

I'm on several projects and had to resume now.

 

The fact is that in the response I have the following error:

 

"RecordID HttpHeaderLog Response ResponseIndex
1 Header Log: 429; You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors."

 

I have carefully read the "OpenAI ChatGPT 3.5Turbo (Completions) Connector" macro and I had to change the "text-davinci-003" model to the "gpt-3.5-turbo" model in the Model formula inside the macro, because text-davinci-003 is deprecated: https://community.openai.com/t/text-davinci-003-deprecated/582617

 

Additionally, I have read the free tier rate limits and seen the following: https://platform.openai.com/docs/guides/rate-limits/usage-tiers?context=tier-free

 

MODEL           RPM   RPD   TPM
gpt-3.5-turbo   3     200   40,000

 

Therefore, within the macro, I have changed WaitSeconds to 25 (seconds) to comply with RPM = 3. Additionally, I am using a Max_Tokens of 200 per row, and the AMP engine is disabled, both in my workflow and in the OpenAI macro.
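
(Quick check of that spacing: at 3 requests per minute, calls must be at least 60 / 3 = 20 seconds apart, so 25 seconds leaves a margin. In code form, using the values quoted above:)

# Sanity check on the wait between calls for a 3 requests-per-minute limit.
requests_per_minute = 3
min_gap_seconds = 60 / requests_per_minute   # 20.0 seconds between calls
wait_seconds = 25                            # the value chosen above, ~5 seconds of headroom
assert wait_seconds >= min_gap_seconds
print(min_gap_seconds, wait_seconds)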

 

And my workflow response is still Error 429.

 

My goal now is to see it responding correctly before increasing my quota.

 

Again, sorry for the delay and thank you for your attention.

 

Thanks,

Ismael, 

Hiblet
10 - Fireball

Hi @isleiman 

 

You are correct that you have to change the model.  This was updated in the package on the Community site, but you probably had a version from before this change was made.

 

Good also that you adapted the macro to comply with request rates.  If the request rate was the problem, I would expect the 429 to come up intermittently.  For example, overnight you would clear your rate count, and the first time you used the macro in the morning it would succeed.  After a period of use, the OpenAI server would then start responding with 429's.  However, the problem is not intermittent, so I do not think call rate is the issue.

 

Access to ChatGPT is only free for a fixed period, starting when you create your user identity.  I believe it was 3 months.  This is of course out of my control, and dictated by OpenAI.  Once this period elapses, they require you to set up a paid account.  The cost is not high, but it is a cost.  I would also encourage you to use the ChatGPT4 connector, because the performance of 4 is a large step up from 3.5.  3.5 is now considered legacy, and will be phased out at some point in 2024.

 

Apologies for the delayed response and I hope that helps.

 

Steve

stefaniadurdan
8 - Asteroid

Hello! 

I am trying to use this macro, but I think I am doing something wrong because it does not return any results. I do not get any errors, but the output is blank.

I only chose the Record ID and Prompt, because I understand the others are optional. In my screenshot the keys are not filled in for security reasons, but I had them in when running the workflow.

 

I also attach what my data looks like before the macro.

 

Does anyone have any suggestions?

 

[Screenshot: OpenAI-1Screenshot 2024-04-18 172714.png]

 

[Screenshot: Open AI-2Screenshot 2024-04-18 172835.png]

Hiblet
10 - Fireball

Hi @stefaniadurdan, could you try adding a Browse tool onto the macro output, please?  There is this weird Alteryx thing where, if there is nothing downstream of a macro, it does not run.  It has caught me out before.

 

You are right, the other elements are optional.  You will need your API key and Org ID, but I assume you have these and have just blanked them out, which is good; you should keep these details secret.

 

I hope the Browse tool brings the macro to life.

 

Steve

stefaniadurdan
8 - Asteroid

@Hiblet, still not working. Might it be something related to my company's IT infrastructure, somehow blocking any API that is not pre-approved per request? Although I think there should be an error in that case, which is strange.

 

What I also found strange is that OpenAI shows the API was used, so it is nothing related to OpenAI itself; I am attaching the screenshot as well.

 

Any other thoughts?

 

[Screenshot: OpenAi-3Screenshot 2024-04-18 194124.png]

 

[Screenshot: 4-OpenAI-4Screenshot 2024-04-18 194351.png]

Hiblet
10 - Fireball

Hi @stefaniadurdan , I would expect to see some records, and expect to see them erroring.  I can see your Sample tool there limiting the data record count to 3, which is sane and sensible for testing. 

 

If you click on the output of the Sample tool, do you definitely have 3 inbound records?  Sorry to ask, but I cannot for the life of me figure out how three records could be passed in, and none come out.

 

Please note that OpenAI are deprecating ChatGPT 3.5 on May 10th 2024. 

 

Office hours are finishing here, but I will be back in tomorrow.  Would you be able to do a Teams session or similar, so I could see your screen?  I understand that may not be possible due to perfectly legitimate IT restrictions, but if an NDA is required, it can be signed, no problem.  I would love to see what is going on.

 

Steve

stefaniadurdan
8 - Asteroid

Hello again, @Hiblet. I am actually using ChatGPT 4, but I probably used this thread by mistake.

Here is the input of the macro: 3 records fed into the macro, nothing as output.

 

Not sure what the issue could be, but unfortunately I am not able to do any screen sharing, not even with an NDA. Is there any other way you could support?

stefaniadurdan
8 - Asteroid

@Hiblet , one more thing that I noticed: I did a test with nothing filled in as the Secret Key and Organization ID, and the results are the same: no output, no error. I suppose there should be an error saying that I did not fill in any key, right?

stefaniadurdan
8 - Asteroid

@Hiblet , I was looking into the macro itself, and for the Macro Input the chosen file input is "4_Outer_Macro", which does not appear in the package. Could this be the reason? Do you know why I am missing this part, or whether it is actually not the cause of the error?
The package is exactly the same for both ChatGPT 3.5 and ChatGPT 4.

 

Thanks!

[Screenshot: Screenshot 2024-04-19 155117.png]

Hiblet
10 - Fireball

Hi @stefaniadurdan, I can try to replicate here.  Please can you delete the posted image from your post above as soon as you can, to remove your credentials?  Steve

Hiblet
10 - Fireball

Hi @stefaniadurdan , The "4_Outer_Macro" was the development name of the macro, which I renamed on release to "OpenAI_ChatGPT_Completions", just to make it more friendly.

I have tried to replicate this here, with your credentials, and I get "The model `gpt-4-turbo` does not exist or you do not have access to it."  As a check, I manually made a call to the OpenAI API endpoint that lists the models that are currently available, using your key, and it showed me a list of models that did not include "gpt-4".  Unfortunately, the GPT-4 service is pay-walled.  Early last year you had to join a queue to use GPT-4, but I think now you can sign up and pay for it without restriction.  Even with your key, I still got a record out of the macro that gave me an error message, so I find it very odd that you get no records passing through.

I understand that you would not want to sign up for ChatGPT 4 as a pay service if you are not sure that the macro is going to work.  If you do decide to sign up to ChatGPT 4 as a pay service, make sure that you start a new account, as your credentials are showing in the post above.  This is not such a problem if the account above is not on the pay service, because it is just a trial account and has no credit card attached.

If you want, I could demo the macro to you on a sharing session, so that you do not have to share your screen.  If you provide the prompts, I could pass them through using my credentials and send you the results back.  My email is steve at continuum dot je, where I have used words for the punctuation characters to stop automated email harvesters pinching my mail address (and then spamming me to oblivion).

Hope that helps,  Steve
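
(For anyone wanting to run the same availability check themselves, the list of models a key can access can be queried directly. A minimal Python sketch, assuming the requests library is available and using a placeholder key:)

# Sketch: listing the models an API key can access, the same kind of check
# described above. The key is a placeholder.
import requests

resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": "Bearer sk-..."},   # placeholder key
    timeout=30,
)
model_ids = [m["id"] for m in resp.json()["data"]]
print("gpt-4 models available:", [mid for mid in model_ids if mid.startswith("gpt-4")])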