Some customers would like to log all inputs & outputs that go into each model. The goal is to save every JSON request and response with minimal (or no) impact to latency.
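One common way to keep logging off the hot path is to enqueue each request/response pair and let a background thread do the serialization and writing. This is a minimal illustrative sketch, not Promote's implementation; the class name and sink callable are hypothetical.

```python
import json
import queue
import threading

class AsyncRequestLogger:
    """Hypothetical background logger: the request path only enqueues;
    a worker thread does the (slower) JSON serialization and write."""

    def __init__(self, sink):
        self._queue = queue.Queue()
        self._sink = sink  # any callable that accepts one JSON line
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, request, response):
        # Called on the hot path: an O(1) enqueue, no disk I/O here.
        self._queue.put({"request": request, "response": response})

    def _drain(self):
        while True:
            record = self._queue.get()
            if record is None:  # shutdown sentinel
                self._queue.task_done()
                break
            self._sink(json.dumps(record))
            self._queue.task_done()

    def close(self):
        # Flush everything that was enqueued, then stop the worker.
        self._queue.put(None)
        self._queue.join()

# Demonstration: collect the emitted JSON lines in memory.
lines = []
logger = AsyncRequestLogger(lines.append)
logger.log({"x": 1}, {"score": 0.9})
logger.close()
print(lines[0])  # → {"request": {"x": 1}, "response": {"score": 0.9}}
```

In a real deployment the sink would append to a file or forward to a log store, so the per-request cost stays at a single queue insert.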
I agree that this would be helpful, and it would be especially useful for testing and logging if it could be switched on and off.
The reason I mention switching is that an API response can contain client-sensitive data, which would then fall foul of data-retention requirements such as GDPR, so we would need to turn response logging off in our production environments.
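The switch could be as simple as an environment variable that suppresses the response payload while still logging the request. A sketch, assuming a hypothetical flag name (`PROMOTE_LOG_RESPONSES` is not a real Promote setting):

```python
import json
import os

# Hypothetical flag: set PROMOTE_LOG_RESPONSES=false in production so
# client-sensitive response payloads are never retained (GDPR concern).
LOG_RESPONSES = os.environ.get("PROMOTE_LOG_RESPONSES", "true").lower() == "true"

def build_log_record(request, response):
    """Always log the request; redact the response unless enabled."""
    record = {"request": request}
    if LOG_RESPONSES:
        record["response"] = response
    else:
        record["response"] = "<redacted: response logging disabled>"
    return json.dumps(record)

print(build_log_record({"customer_id": 42}, {"score": 0.87}))
```

With the flag off, requests can still be audited for debugging while the sensitive model outputs are dropped before they ever reach disk.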
Prediction analytics exists in 18.3+
This feature has been implemented as of 2018.3
Hey @RossK and @DavidCo
Would you be able to provide more detail about how to enable this - i.e. on a standard workflow, enable full verbose logging of the inputs that go into a predictive tool for each run?
Where would this data be logged - I presume it would be in Mongo - which collection would these inputs be logged in?
Thanks for asking, I'm happy to add more detail here.
As of 2018.3, Promote has implemented both monitoring and prediction logging.
For monitoring, Promote tracks and reports total requests, requests by response code (200, 400, etc.), and average latency over the past 28 days. Users are able to select varying time windows to see the reporting period of most interest to them. That data is stored on the Promote cluster in an InfluxDB database.
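The same per-code counts and average latency can be reproduced over extracted records. A pure-Python sketch; the record shape (`code`, `latency_ms`) is an assumption for illustration, not Promote's actual schema:

```python
from collections import Counter

# Assumed record shape: one dict per request, with the response code
# and the latency in milliseconds.
records = [
    {"code": 200, "latency_ms": 31},
    {"code": 200, "latency_ms": 45},
    {"code": 400, "latency_ms": 12},
    {"code": 500, "latency_ms": 80},
]

requests_by_code = Counter(r["code"] for r in records)
avg_latency = sum(r["latency_ms"] for r in records) / len(records)

print(dict(requests_by_code))  # → {200: 2, 400: 1, 500: 1}
print(avg_latency)             # → 42.0
```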
For request logging, Promote tracks the inputs and outputs of every request for up to 14 days and offers an interface for searching for specific requests by ID, time, or response code.
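Conceptually, that search is a filter over the logged records. A small sketch over assumed field names (`id`, `time`, `code` are illustrative, not Promote's schema):

```python
from datetime import datetime

# Illustrative logged requests; field names are assumptions.
requests = [
    {"id": "a1", "time": datetime(2018, 11, 1, 9, 0), "code": 200},
    {"id": "b2", "time": datetime(2018, 11, 2, 9, 0), "code": 400},
    {"id": "c3", "time": datetime(2018, 11, 3, 9, 0), "code": 200},
]

def search(requests, request_id=None, since=None, code=None):
    """Filter logged requests by ID, time, or response code."""
    results = requests
    if request_id is not None:
        results = [r for r in results if r["id"] == request_id]
    if since is not None:
        results = [r for r in results if r["time"] >= since]
    if code is not None:
        results = [r for r in results if r["code"] == code]
    return results

print([r["id"] for r in search(requests, code=200)])  # → ['a1', 'c3']
```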
In both cases, data can be extracted from Promote using a batch ETL job and stored indefinitely.
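Such a batch job could run daily: read whatever is still inside the 14-day retention window and append it to an archive that is kept indefinitely. A hedged sketch; the record shape and JSON Lines archive format are choices made here for illustration:

```python
import json
import tempfile
from datetime import datetime, timedelta

def export_batch(records, archive_path, now=None):
    """Append every record still inside the 14-day retention window
    to a JSON Lines archive, so it survives past Promote's retention."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=14)
    kept = [r for r in records if datetime.fromisoformat(r["time"]) >= cutoff]
    with open(archive_path, "a") as f:
        for r in kept:
            f.write(json.dumps(r) + "\n")
    return len(kept)

# Demonstration with a fixed "now" so the window is deterministic.
records = [
    {"time": "2018-11-01T09:00:00", "code": 200},
    {"time": "2018-10-01T09:00:00", "code": 200},  # outside the window
]
archive = tempfile.NamedTemporaryFile(suffix=".jsonl", delete=False)
n = export_batch(records, archive.name, now=datetime(2018, 11, 5, 0, 0))
print(n)  # → 1
```

Scheduling this more often than the retention window (e.g. daily against a 14-day window) guarantees no request is lost before it is archived.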
Happy to discuss any additional questions you might have!
🙂 David - that is very encouraging and very helpful.
It would be very useful if some of this technology could be ported across to the Server product, since there's a similar situation for regular Alteryx canvasses: in some situations it would be useful to turn on detailed logging of the inputs a canvas sends to an API.
Thank you for taking the time to reply - learned something new and useful today!