

Improve Log files for simpler parsing

Getting simple information from a workflow log, such as the workflow name, run start date/time, and run end date/time, is far more complex than it should be. Ideally the log would carry, as separate and distinctly labelled line items, the workflow path and name, the start date/time, the end date/time, and potentially the run time, to save having to calculate it. An overall workflow status would also be useful, i.e. if there was an Error in the run the overall status is Error; if there was a Warning, the overall status is Warning; otherwise Success.

 

Parsing out the workflow name and start date/time is challenging enough, but then having to parse out the run time, convert it to a time, and add it to the start date/time to get the end date/time makes retrieving basic monitoring information far more complex than it should be.
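For what it's worth, the final arithmetic is simple once the pieces are extracted; the pain is the extraction. A minimal Python sketch, assuming the log reports the run time in a line like "Finished in n seconds" (an assumption; check the wording against your own Engine logs):

```python
import re
from datetime import datetime, timedelta

# Parse the run time out of the log text and add it to the (separately
# parsed) start date/time to recover the end date/time.
# The "Finished in n seconds" wording is an assumption, not a documented format.
def end_time(log_text: str, start: datetime) -> datetime | None:
    m = re.search(r"Finished in ([\d.]+) seconds", log_text)
    return start + timedelta(seconds=float(m.group(1))) if m else None

# e.g. end_time(open(log_path).read(), start=datetime(2021, 3, 1, 6, 30))
```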

10 Comments
Hugo
9 - Comet

Having the file name in the title would be great too

ARich
Alteryx Alumni (Retired)
Status changed to: Under Review
 
fharper
12 - Quasar

I think it's useful to point out that log files exist on two levels, or at least I am only aware of two.

 

The first is the system-generated log files, which embed a sequential number to maintain uniqueness, like "Engine x64_Log_1304355770.log"; these are written to a folder you can configure, or to a default location.

 

The second is the output view you see when running manually or that you capture when running from a batch file and redirecting output to a log file you define.

 

The first is certainly difficult to begin with, because you can't easily tell from the file name which flow, or which flow instance, it refers to; a naming convention based on flow name and timestamp would be very helpful. After that, parsing and content enhancements are basically the same for both.

 

I wrote a parser for log files and found it not difficult, and there is a parser macro in the CReW macros (found after I wrote my own...), but I do see value in standardizing the output in some areas, beyond giving system-generated log files meaningful names:

  • Key info should have standard prefixes or tag strings we can parse on, as well as easily spot visually: flow path, flow name, start and end times, duration, and maybe a few others.
    • For example, I can see the full path of the flow being run on a line with the standard prefix "Started running", but I still have to parse the flow name out of the path for analysis and reporting/notification purposes. If the log posted some of the Engine constants, like [Engine.TempFilePath], [Engine.WorkflowDirectory], and [Engine.WorkflowFileName], the info would be pre-parsed to a degree, and ideally all in one area of the messaging.
  • The Message tool, which I have started using, lets you choose to a degree where/when a message appears in the stream of the output view, which we capture when running jobs from batch files to create logs. From this I gather the log is actually a composite of possibly several temporary log streams. If possible, it would be nice to have the basic stats up front or at the end; the point is to have one place that gives a visual snapshot without scrolling up and down to pick these things out. Not so much for parsing as for the human eye: flow name, path, start, end, duration, and rows read or written by each tool, all in one place with no jumping around. Then let the current flow of messages happen as-is, so we can analyze the order of events if needed. This might be easy to do as a separate tracking-log summary appended or prepended to the existing log presentation.
  • Tangent thought... in the Message tool we can set the message type and level, which determine how it is formatted and whether it surfaces in the final output view when inside a macro. If Alteryx does create a "Run Summary" section, that would in effect create a new message "type", so it would be worth extending the Message tool so messages flagged as such are formatted and grouped into that section of the output view.
  • All of my thoughts above are still largely aimed at a program parsing the data, but there is another idea that may align well with Andrew's original request: Alteryx could take the summary-type info I described, plus any other elements others may add, and create a single-row "summary" log file, named from the flow name and timestamp, saved as a yxdb with columns for the key fields: flow path, flow name, start and end times, duration, and a high-level error status (see the sketch after this list). This would let someone who doesn't need deeper analysis read a small file, see the basic run info without parsing, tell whether the run failed, and drive reporting/alerts.
    • If someone needs more than that, they can still write a parser for more complex analysis and reporting.
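A rough sketch of that single-row summary in Python, for illustration: the "Started running" prefix is the one I mentioned above, the Error/Warning matching is deliberately naive, and CSV stands in for yxdb here since yxdb is Alteryx's own format:

```python
import csv
import re
from datetime import datetime
from pathlib import Path

def write_summary(log_file: str, out_dir: str) -> None:
    """Write a one-row summary file for a single run's log."""
    text = Path(log_file).read_text(errors="replace")
    m = re.search(r"^Started running (.+)$", text, re.MULTILINE)
    flow_path = m.group(1).strip() if m else ""
    flow_name = Path(flow_path).stem
    # Naive status roll-up: Error beats Warning beats Success.
    status = ("Error" if re.search(r"\bError\b", text)
              else "Warning" if re.search(r"\bWarning\b", text)
              else "Success")
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    out = Path(out_dir) / f"{flow_name or 'unknown_flow'}_{stamp}_summary.csv"
    with out.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["flow_path", "flow_name", "status"])
        writer.writerow([flow_path, flow_name, status])
```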

The way we operationalize/automate our "production work", we need to know more than summary info: we analyze errors and warnings, and check those messages against known and unknown causes, to determine whether a failed job is actually OK, and whether it should be auto-restarted or a responsible person notified for research, etc. For example, if a job fails with known messages we associate with network or "system" issues, like a lost DB connection, our scheduler will auto-restart it every 10 minutes, up to a specified number of attempts, before alerting the on-call person. Another case: when a flow using R code generates a certain error we know is a "false" error from a known R issue, we change the flag in our scheduler from failed to success and don't alert or restart.
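In case it helps picture that triage step, a bare-bones sketch; the message patterns, attempt limit, and actions here are purely illustrative:

```python
# Known-message triage: decide whether a failed run should be restarted,
# re-flagged as success, or escalated to the on-call person.
KNOWN_TRANSIENT = ("DB connection lost", "connection reset")   # illustrative
KNOWN_FALSE = ("R tool known false error",)                    # illustrative

def triage(error_text: str, attempts: int, max_attempts: int = 6) -> str:
    if any(p in error_text for p in KNOWN_FALSE):
        return "mark-success"  # known false positive: no alert, no restart
    if any(p in error_text for p in KNOWN_TRANSIENT):
        return "restart" if attempts < max_attempts else "alert"
    return "alert"             # unknown failure: a person investigates
```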

 

So for us, tweaks to the content and organization as mentioned would be the most appealing and would benefit everyone, but the summary log file idea may be a big win for Andrew and others.

 

derekbelyea
12 - Quasar

Some further ideas for enhanced log functionality:

 

  • Provide user setting options for lite, standard, verbose and no logging
  • Make consistent use of delimiters within log files or, better yet, save log files using JSON formatting (see the sketch after this list)
  • Tag event and message types for easier parsing
  • For verbose logging have options to:
    • Include Tool Name as well as ToolID
    • Include user settings
    • Include workflow configuration settings and values
    • Include tool settings 
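To make the JSON idea concrete, here is one possible shape for a tagged log event, one object per line (JSON Lines); every field name below is invented for illustration, not an actual Alteryx format:

```python
import json
from datetime import datetime, timezone

# Hypothetical JSON-formatted log event with tagged event/message types.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "event_type": "ToolMessage",   # tagged type, easy to filter on
    "tool_id": 7,
    "tool_name": "Input Data",     # tool name alongside the ToolID
    "level": "Info",
    "message": "Records read: 1048576",
}
print(json.dumps(event))
```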

martman
8 - Asteroid

I would like to have a simple error log that can run on the server.

This would be a simple log containing only the workflow name, owner, date/time, and error(s).

Every time a scheduled workflow runs with an error, the log would be placed in a watch folder, which can then be actioned.

 

fharper
12 - Quasar

@martman, you can do this now yourself by leveraging existing functionality. I built my own parser, and there is one in the CReW macros.

 

I set the server setting to log all flows to a specific folder, then have a flow that runs every n minutes to parse any logs that are new since the last pass and rename them with flow and timestamp info, so they are easy to identify for manual reference.
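A simplified sketch of that rename pass in Python; the folder paths are hypothetical, and the "Started running" prefix is the one discussed above:

```python
import re
from datetime import datetime
from pathlib import Path

LOG_DIR = Path(r"C:\AlteryxLogs")          # hypothetical logging folder
NAMED_DIR = Path(r"C:\AlteryxLogs\named")  # hypothetical archive folder

def rename_new_logs() -> None:
    """Rename each raw Engine log to <flow>_<timestamp>.log and archive it."""
    NAMED_DIR.mkdir(parents=True, exist_ok=True)
    for log in LOG_DIR.glob("*.log"):
        text = log.read_text(errors="replace")
        m = re.search(r"^Started running (.+)$", text, re.MULTILINE)
        flow = Path(m.group(1).strip()).stem if m else "unknown_flow"
        stamp = datetime.fromtimestamp(log.stat().st_mtime).strftime("%Y%m%d_%H%M%S")
        log.rename(NAMED_DIR / f"{flow}_{stamp}.log")
```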

 

In the parsing I flag all errors, compare them with known errors, provide alerts with recommendations where possible/appropriate, and feed a stats file for reporting on errors and anomalies, including run times. I use the run times to track min, max, and avg, so another flow in my homemade scheduler can assess running flows and alert if they are running unusually long or beyond the known max.
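The run-time tracking boils down to something like the following; the history structure and the 20% slack factor are illustrative choices:

```python
from statistics import mean

def runtime_stats(history: dict[str, list[float]]) -> dict[str, dict]:
    """Min/max/avg run time (seconds) per flow, from past runs."""
    return {flow: {"min": min(secs), "max": max(secs), "avg": mean(secs)}
            for flow, secs in history.items() if secs}

def is_running_long(flow: str, elapsed: float,
                    stats: dict[str, dict], slack: float = 1.2) -> bool:
    """Alert when a running flow exceeds its known max plus some slack."""
    s = stats.get(flow)
    return bool(s) and elapsed > s["max"] * slack

# e.g. stats = runtime_stats({"daily_load": [310.0, 295.5, 402.1]})
#      is_running_long("daily_load", 600.0, stats)  # -> True
```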

 

Tracking errors has been helpful in spotting issues tied to a number of things, like McAfee causing timeout errors, and network/communications issues that would otherwise go unnoticed.

 

It would be nice if it were part of the software, but there's no need to wait and wish...

martman
8 - Asteroid
Thank you for your reply. Do you have a sample workflow I can study?

MartMan
fharper
12 - Quasar

You can start with the CReW macro, for one thing. I no longer work at the org where I built this, so I can't offer examples. My current org doesn't even use Alteryx... 🙁

So I can't even whip out a quick draft of the approach, but if you have a year of experience you probably know most of what you need and can find the rest in the community.

First, set the option that turns on logging, so the files are created in a folder you choose.

Then simply write a flow to read the directory, read and parse each file, and write out the results.

You can build dynamic command-line statements to move/rename the logs as you see fit, effectively clearing the source folder.
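For example, one such move/rename built and run from Python via the Windows command line; both paths are hypothetical:

```python
import subprocess
from pathlib import Path

src = Path(r"C:\AlteryxLogs\Engine x64_Log_1304355770.log")      # raw log
dst = Path(r"C:\AlteryxLogs\done\daily_load_20240301_0630.log")  # hypothetical new name

# Quote both paths since they may contain spaces; "move" is a cmd builtin.
subprocess.run(f'move "{src}" "{dst}"', shell=True, check=True)
```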

ss64.com is a good source of command-line info.

 

ChrisTX
15 - Aurora

Here's another request related to this Idea:

 

https://community.alteryx.com/t5/Alteryx-Designer/Logging-workflow-metadata-after-run-to-Database/m-...

 

We need to log every run of a workflow as a new row in a database table. Basically, we need to output various elements of the results window into a SQL Server DB, such as rows inserted/updated, length of the workflow run, rows queried, source DB/file path, etc.
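A minimal sketch of that insert, assuming pyodbc and a hypothetical audit table; every name here (DSN, table, columns) is a placeholder:

```python
import pyodbc  # assumes the pyodbc package and a SQL Server ODBC driver

def log_run(workflow: str, started, finished, rows_written: int, status: str) -> None:
    """Append one row per workflow run to a SQL Server audit table."""
    conn = pyodbc.connect("DSN=WorkflowAudit")  # hypothetical DSN
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "INSERT INTO dbo.WorkflowRuns "
            "(workflow, started, finished, rows_written, status) "
            "VALUES (?, ?, ?, ?, ?)",
            workflow, started, finished, rows_written, status,
        )
```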

dbmurray
8 - Asteroid

Wondering if this idea ever got implemented. Looking at something like this now and keen to know how I can parse log files into a database table so we can keep an eye on scheduled workflow performance.