

API Error Workaround

Dr_Dust
5 - Atom

I have dozens of data sources being compiled into a single metric report, but periodically my API data sources error out and break the entire workflow. They come back online in a day or two, but that can be a long time to wait... The API data is less significant than the other data and can be 'old', but it is interconnected and must be present. Given that, I would like to incorporate a backup .yxdb file that gets used when the APIs are unavailable, with the same file being updated whenever they are available. Of course this can be achieved by manually changing a few connections, but is it possible to automate it so the workflow can be run via the server?

1 REPLY
grossal
15 - Aurora

Hello @Dr_Dust,

 

Welcome to the Alteryx community! I'm happy to help, and I have good news for you: yes, it is possible.

 

I have built similar functionality into a lot of macros for customers. Let's call the API data 'customer data' for this example.

 

We have a 'Get Customer data' macro that connects to the API: if an HTTP code other than 200 is returned, we load a local backup file; if we get HTTP 200, we overwrite the backup file with the most recent data. In both cases, a valid data set is returned to the main workflow.
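
If it helps to see the idea outside the canvas, here is a minimal Python sketch of the same fallback logic. API_URL, BACKUP_PATH and the CSV format are assumptions for illustration, not your real setup; inside Alteryx you would keep the .yxdb backup instead.

```python
import requests
import pandas as pd

# Hypothetical names: replace with your real endpoint and file share.
API_URL = "https://api.example.com/customers"
BACKUP_PATH = r"\\fileshare\team\backups\customer_data.csv"

def get_customer_data() -> pd.DataFrame:
    """Return fresh API data when available, otherwise the last good backup."""
    try:
        response = requests.get(API_URL, timeout=30)
        status = response.status_code
    except requests.RequestException:
        status = None

    if status == 200:
        # Assumes the API returns a JSON list of records.
        data = pd.DataFrame(response.json())
        data.to_csv(BACKUP_PATH, index=False)  # refresh the backup
        return data

    # API unavailable or unhealthy: warn and fall back to the backup.
    print("Warning: API didn't return data, a backup is used")
    return pd.read_csv(BACKUP_PATH)
```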

 

A schematic of this macro could look like this:

 

[Image: schematic of the 'Get Customer data' macro]

 

What happens? And why?

First we connect to the API and (try to) download the data. A Test tool is connected to the Download tool; it checks the HTTP code, and if anything other than 200 is returned, we write a warning to the workflow (e.g. 'API didn't return data, a backup is used').

Afterwards we load the backup at the bottom, union it to the data that we downloaded, and use a Filter tool that only passes rows with HTTP 200.

 

We have 2 cases:

1) The downloaded data AND the backup have HTTP 200

2) The downloaded data has a different status code and only the backup has HTTP 200

 

In both cases, we take only the first row. In case 1), that means we keep the downloaded data; in case 2), it means we take the backup data.
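
In pandas terms, the Union + Filter + first-row logic boils down to something like this. A toy sketch: the frames and the 'status' column are illustrative assumptions, not your real data.

```python
import pandas as pd

# Downloaded rows carry whatever status the API returned; the backup rows
# are tagged 200 because they were written after a successful call.
downloaded = pd.DataFrame({"status": [500], "value": [None]})
backup = pd.DataFrame({"status": [200], "value": [42]})

# Union with the download on top, then keep only HTTP 200 rows.
unioned = pd.concat([downloaded, backup], ignore_index=True)
healthy = unioned[unioned["status"] == 200]

# The first remaining row is the fresh data when the download passed the
# filter, and the backup otherwise.
result = healthy.head(1)
print(result)  # here: the backup row, because the download returned 500
```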

 

Afterwards we write the data to the macro output and the backup file. 

 

Important

For this to work, it is important that the backup file is saved on a file share, so that the workflow always knows which file to reference. A path that exists only on your local machine would break once the workflow runs on the Server.
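
A quick sanity check along these lines (the UNC path below is a made-up example):

```python
from pathlib import Path

# Hypothetical UNC path; the share must be reachable from both your
# machine and the Alteryx Server worker that runs the workflow.
BACKUP_PATH = Path(r"\\fileshare\team\backups\customer_data.yxdb")

# Fail fast with a clear message if the share is not reachable, instead
# of erroring later in the middle of the workflow run.
if not BACKUP_PATH.parent.exists():
    raise FileNotFoundError(f"Backup share not reachable: {BACKUP_PATH.parent}")
```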

 

 

There are a lot of similar ways to achieve the same result with more or less complicated procedures. I have often built nested macros because of other required processes, but the base concept is always the same. Let me know what you think :-)

 

 

Best

Alex
