I am thrilled to post this "one tool wonder" made possible by the new Python Tool in Alteryx Designer 2018.3.
Thank you to @IanCo, our Solutions Consultant, for steering me in this direction.
This post will show you how to use the new Alteryx Designer Python Tool to implement the Qualtrics Python 3 API calls to download your survey data. (A template workflow is attached below that includes all of the Qualtrics Python 3 code mentioned in this post, ready for you to customize.)
IMPORTANT: Before you run your workflow, you need to make one change to a file in the Alteryx Designer 2018.3 Python installation.
Enjoy!
Heather Harris
Alaska Airlines
Alteryx ACE
----- Cut and Paste This Python Code Into the Python Tool -----
----- Enter your Qualtrics API Token, Survey ID and Data Center ID -----
# Python 3
import requests
import zipfile
import json
import io
# Setting user Parameters
apiToken = "<YOUR_TOKEN>"
surveyId = "<YOUR_SURVEY_ID>"
fileFormat = "csv"
dataCenter = '<YOUR_DATA_CENTER_ID>'
# Setting static parameters
requestCheckProgress = 0
progressStatus = "in progress"
baseUrl = "https://{0}.qualtrics.com/API/v3/responseexports/".format(dataCenter)
headers = {
    "content-type": "application/json",
    "x-api-token": apiToken,
}
# Step 1: Creating Data Export
downloadRequestUrl = baseUrl
downloadRequestPayload = '{"format":"' + fileFormat + '","surveyId":"' + surveyId + '"}'
downloadRequestResponse = requests.request("POST", downloadRequestUrl, data=downloadRequestPayload, headers=headers)
progressId = downloadRequestResponse.json()["result"]["id"]
print(downloadRequestResponse.text)
# Step 2: Checking on Data Export Progress and waiting until export is ready
while requestCheckProgress < 100 and progressStatus != "complete":
    requestCheckUrl = baseUrl + progressId
    requestCheckResponse = requests.request("GET", requestCheckUrl, headers=headers)
    requestCheckProgress = requestCheckResponse.json()["result"]["percentComplete"]
    print("Download is " + str(requestCheckProgress) + " complete")
# Step 3: Downloading file
requestDownloadUrl = baseUrl + progressId + '/file'
requestDownload = requests.request("GET", requestDownloadUrl, headers=headers, stream=True)
# Step 4: Unzipping the file
zipfile.ZipFile(io.BytesIO(requestDownload.content)).extractall("<YOUR_PATH>/MyQualtricsDownload")
print('Complete')
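The Step 2 loop above polls as fast as it can and never updates `progressStatus`. A slightly more defensive version of that polling pattern, sketched here with an injected `check` function standing in for the GET request (the helper name and the simulated payload are my own, not part of the Qualtrics sample):

```python
import time

def wait_for_export(check, interval=0.5, max_checks=120):
    """Poll check() until the export reports 100 percent complete.

    check() must return a dict shaped like Qualtrics' "result" payload,
    e.g. {"percentComplete": 42.0, "status": "in progress"}.
    """
    for _ in range(max_checks):
        result = check()
        if result.get("status") == "failed":
            raise RuntimeError("export failed")
        if result["percentComplete"] >= 100:
            return result
        time.sleep(interval)  # avoid hammering the API between checks
    raise TimeoutError("export did not finish in time")

# Simulated check() standing in for the GET request in Step 2
progress = iter([30.0, 70.0, 100.0])
result = wait_for_export(
    lambda: {"percentComplete": next(progress), "status": "in progress"},
    interval=0)
print(result["percentComplete"])  # 100.0
```

In the real workflow, `check` would wrap `requests.get(requestCheckUrl, headers=headers).json()["result"]`.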
Hi Gooley,
Man, is this a bug? It's happened to me three times in a row with only that node on my workflow. Maybe I'll try running it into a Browse with the added code that will read the data and push it into the workflow, and see if that works.
Anyone else?
I didn't do anything different except save the WF to my company Gallery, and it was able to successfully run and publish the result to my server. I opened the WF from the Gallery into Designer and it had now retained the Python script. Goofy stuff.
Everything is going great - I can get a CSV file as output - but the only place it will go is the Alteryx engine's working directory for this workflow. No matter what I do to the path, Python seems to ignore it and continues to write to that working directory. Any ideas?
Can you safely share a copy of your Python code?
Sure - I've taken out some identifiers. Issues I am having: 1) the extract path seems ineffective (output is just going to the Alteryx engine folders here - C:\ProgramData\Alteryx\Engine\750393c084fbae5e4e2795f2dc9e9f8d\2\MyQualtricsDownload). 2) I want to set parameters for startDate, endDate, and useLabels. I can successfully pass date variables into my Python code, which is great - but they don't seem to affect the output. Same issue with useLabels: true or false, the output is the same. I'm reading up more on which parameters are available.
Other custom thing is proxy stuff at the top. Finally figured that out for my organization. Thank you in advance for your time and willingness to help. This thread has been great!
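On the extract-path question: one likely explanation (an assumption, not confirmed Alteryx behavior) is that a relative path passed to `extractall` resolves against the Python tool's current working directory, which is the Alteryx engine temp folder, while an absolute path is honored as written. A small self-contained sketch, using an in-memory zip in place of the downloaded `requestDownload.content`:

```python
import io
import os
import tempfile
import zipfile

# Build a tiny in-memory zip standing in for requestDownload.content
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("survey.csv", "Q1,Q2\n1,2\n")

# A relative path resolves against the current working directory --
# inside the Python tool, that is the Alteryx engine's temp folder
print(os.path.abspath("MyQualtricsDownload"))

# An absolute path is honored regardless of the working directory
target = os.path.join(tempfile.mkdtemp(), "MyQualtricsDownload")
zipfile.ZipFile(io.BytesIO(buf.getvalue())).extractall(target)
print(os.listdir(target))  # ['survey.csv']
```

If the output still lands in the engine folder with a fully absolute path, the path string itself (quoting, backslash escaping) would be the next thing to check.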
from ayx import Alteryx
# Python 3
import requests
import zipfile
import json
import io
import sys
import subprocess
import pandas as pd
import os
os.environ['http_proxy']=<removed>
os.environ['https_proxy']=<removed>
dat = Alteryx.read('#1')
startdate = dat['StartDateX'].iloc[0]
enddate = dat['EndDateX'].iloc[0]
print(startdate)
# Setting user Parameters
apiToken = <removed>
surveyId = <removed>
fileFormat = 'csv'
dataCenter = 'az1'
useLabels = True
# Setting static parameters
requestCheckProgress = 0
progressStatus = "in progress"
baseUrl = "https://{0}.qualtrics.com/API/v3/responseexports/".format(dataCenter)
headers = {
    "content-type": "application/json",
    "x-api-token": apiToken,
}
# Step 1: Creating Data Export
downloadRequestUrl = baseUrl
downloadRequestPayload = {
    "format": fileFormat,
    "surveyId": surveyId,
    "startDate": startdate,
    "endDate": enddate,
    "useLabels": useLabels
}
downloadRequestResponse = requests.request("POST", downloadRequestUrl, json=downloadRequestPayload, headers=headers)
progressId = downloadRequestResponse.json()["result"]["id"]
print(downloadRequestResponse.text)
# Step 2: Checking on Data Export Progress and waiting until export is ready
while requestCheckProgress < 100 and progressStatus != "complete":
    requestCheckUrl = baseUrl + progressId
    requestCheckResponse = requests.request("GET", requestCheckUrl, headers=headers)
    requestCheckProgress = requestCheckResponse.json()["result"]["percentComplete"]
    print("Download is " + str(requestCheckProgress) + " complete")
# Step 3: Downloading file
requestDownloadUrl = baseUrl + progressId + '/file'
requestDownload = requests.request("GET", requestDownloadUrl, headers=headers, stream=True)
# Step 4: Unzipping the file
zipfile.ZipFile(io.BytesIO(requestDownload.content)).extractall(r"c:\Users\rdfoulge\Documents")
print('Complete')
# Step 5: Read CSV into Data Frame
df = pd.read_csv(r"c:\Users\rdfoulge\Documents\JLL POU.csv", sep=",")
Alteryx.write(df,1)
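On the startDate/endDate question: the Qualtrics responseexports parameters are documented as ISO 8601 timestamps, so a bare date coming out of the workflow may be silently ignored rather than rejected (this is an assumption worth verifying against the Qualtrics API docs). A hedged sketch of normalizing the value before putting it in the payload; the raw value shown is hypothetical:

```python
from datetime import datetime, timezone

# Hypothetical value as it might arrive from the Alteryx input anchor
startdate_raw = "2018-07-01"

# If only a date survives the workflow, append an explicit time and
# UTC offset so the API receives a full ISO 8601 timestamp
startdate = (datetime.strptime(startdate_raw, "%Y-%m-%d")
             .replace(tzinfo=timezone.utc)
             .strftime("%Y-%m-%dT%H:%M:%SZ"))
print(startdate)  # 2018-07-01T00:00:00Z
```

The same normalization would apply to `enddate`; `useLabels` should go into the JSON payload as a real boolean (`True`/`False`), not the strings "true"/"false".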
Sorry for the delay, Becky. I'm just now getting back to this and will take a look this afternoon.
The only thing that I can see when comparing it to my own is that the drive letter in your path is lower case whereas mine is upper. I would be amazed if that were the issue, but it's worth a try I suppose.
# Step 4: Unzipping the file
zipfile.ZipFile(io.BytesIO(requestDownload.content)).extractall(r"C:\Users\rdfoulge\Documents")
print('Complete')
I just reran it and it worked. So strange. Does Jupyter have some caching that I should know about? Was it just not seeing my path change (and my other parameter changes)? Do I need to open a new Python tool every time I change it?
Thank you so much for taking time to help me!
So glad it's working now. I'll have to defer to others who are more technically inclined to answer your other questions. Best of luck!