Hi Alteryx Community,
I am working on creating a Batch Macro to retrieve contact information using the following API:
"https://api.hubapi.com/contacts/v1/contact/vid/" + [vid] + "/profile?property=firstname&property=lastname&property=email&property=phone"
Scenario:
I need to retrieve firstname, lastname, email, and phone for a large list of contact vids (roughly 300K).
What I've Done So Far:
Input Data:
I have a list of vids in a column, stored as part of my workflow input.
API Configuration:
The API endpoint dynamically appends the vid to the URL. I plan to construct the URL using a Formula tool: "https://api.hubapi.com/contacts/v1/contact/vid/" + [vid] + "/profile?property=firstname&property=lastname&property=email&property=phone"
Authentication:
I have already set up an API key and tested the endpoint successfully with cURL (a rough Python equivalent of that test is sketched below).
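For reference, here is a minimal Python sketch of that single-endpoint test. It is a sketch only: the Bearer-token header assumes a private-app token (HubSpot's legacy v1 API instead took an API key as a hapikey query parameter, so match whatever your cURL call used), and the vid is a made-up example.

```python
# Sketch: test the profile endpoint for one vid, mirroring the cURL check.
import requests

HUBSPOT_TOKEN = "..."  # placeholder for your real credential
vid = 1234             # hypothetical example vid

url = (
    f"https://api.hubapi.com/contacts/v1/contact/vid/{vid}"
    "/profile?property=firstname&property=lastname"
    "&property=email&property=phone"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {HUBSPOT_TOKEN}"})
resp.raise_for_status()
print(resp.json())
```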
Goal:
Create a Batch Macro that passes each vid into the URL, calls the API, and returns the requested contact properties for every record.
Challenges:
Control Parameter Setup:
I understand that the Control Parameter tool is used to dynamically pass values (in this case, the vid values) into the macro, but I’m not sure how to configure it correctly.
Macro Workflow Structure:
I'm unsure how to set up the tools within the macro for building the URL, making the request (Download tool), and parsing the response.
Batch Configuration:
How do I ensure the macro processes the data in batches or iteratively handles all vids?
Please help me.
Thank you!
You don't need a batch macro here, per se. You can: 1) set up your auth, 2) append it to your main data flow, 3) dynamically create your URLs, 4) add a RecordID, 5) process via the Download tool. Your real issue is timeout/concurrency and getting this process to run as efficiently as possible. A minimal sketch of that flow is below.
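Outside Alteryx, the shape of that flow looks roughly like this sketch, assuming a hypothetical `vids` list (enumerate plays the RecordID tool's role, requests plays the Download tool's, and the Bearer header is an assumed auth style):

```python
# Sketch: assign a record ID to each vid, build the URL per record, fetch.
import requests

vids = [1234, 5678, 9012]  # stand-in for your 300K-row vid column
BASE = "https://api.hubapi.com/contacts/v1/contact/vid/{vid}/profile"
PROPS = "?property=firstname&property=lastname&property=email&property=phone"

rows = []
for record_id, vid in enumerate(vids, start=1):   # RecordID equivalent
    url = BASE.format(vid=vid) + PROPS            # Formula tool equivalent
    resp = requests.get(url, headers={"Authorization": "Bearer <token>"})
    # Keep the RecordID with the raw response so it can be joined back later.
    rows.append({"RecordID": record_id, "vid": vid, "DownloadData": resp.text})
```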
Pagination should not be an issue here because you are calling each record individually. Pagination tends to be an issue with OData APIs and situations where you retrieve a whole catalog in a single return; in your case each response is for a specific entry.
Thank you for the quick response, @apathetichell.
Could you please explain how to use RecordID to achieve my goal? How would the implementation of RecordID work?
Each of your 300K values needs a RecordID so that, after the API call, you can manipulate your values/downloaded data and link the results back to the originating request. You'd just add a RecordID tool. As a warning: running 300K API calls can be a fairly long process (see the pacing sketch below).
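To make "fairly long" concrete, here is a crude pacing sketch. The 100-requests-per-10-seconds window is an illustrative assumption, not HubSpot's documented limit (check your plan's rate limits), and `urls` is the hypothetical per-record URL list built as in the earlier sketch:

```python
# Sketch: reuse one session and pace a large run of API calls.
import time
import requests

session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"  # assumed auth style

REQUESTS_PER_WINDOW = 100  # assumed limit -- verify against your plan
WINDOW_SECONDS = 10

responses = []
for record_id, url in enumerate(urls, start=1):
    responses.append({"RecordID": record_id, "body": session.get(url).text})
    if record_id % REQUESTS_PER_WINDOW == 0:
        time.sleep(WINDOW_SECONDS)  # crude pause after each window
# At ~100 calls per 10 s, 300K calls take on the order of 8+ hours.
```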
Hi @apathetichell, sorry, I'm still not following.
Could you explain with a demo workflow to clear up my doubt?
Thanks!
I'm sorry, what's your question here? Your workflow has 300K records, which you are using to create 300K endpoints for your API calls. Give them a RecordID via the RecordID tool.
The RecordID will make parsing the API response (JSON/XML) easier after your API call, because you can tie each parsed row back to its input record.
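For example, a minimal sketch of that post-call linking, assuming `responses` is the hypothetical list of {RecordID, body} rows from the call step and that each body follows the v1 contact-profile shape (properties keyed by name, each holding a "value"):

```python
# Sketch: parse each JSON body and carry the RecordID through, so the
# parsed fields can be joined back to the original input row.
import json

parsed = []
for row in responses:
    props = json.loads(row["body"]).get("properties", {})  # assumed v1 shape
    parsed.append({
        "RecordID": row["RecordID"],
        "firstname": props.get("firstname", {}).get("value"),
        "lastname": props.get("lastname", {}).get("value"),
        "email": props.get("email", {}).get("value"),
        "phone": props.get("phone", {}).get("value"),
    })
# Join `parsed` back to your input on RecordID -- the Join tool's role.
```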