This is a fun one. I can think of a few ways to approach this problem; the trouble is I'm not sure any of them are particularly efficient, so I wanted to ask the pros.
Here's the set up:
I have a table that looks like this.
[Record ID] [Open_Date_Time] [Close_Date_Time]
My objective is to create an output that looks like this:
I'm pretty sure I'm already on the right path in that I've used Generate Rows to build a table of every hour between the min and max timestamps in the data. I assume that regardless of the solution, I'd be using this as a frame to compute against.
The way I see it:
A) Append the frame table to the fact data, creating a distinct record for every possible date/time in the frame paired with every unique record in the fact data. Then a simple formula (if frame_date_time >= [Open_Date_Time] and so on), summarize the results of that if-then formula, and I get my result. This "works", but my 'hello world' sample for this problem is about 20k records, and after the append it's about 4 million. If I expand the fact data I'm creating an enormous number of records, and I worry about performance and stability. This might be unfounded, because I've never actually built a workflow that had performance or stability issues, but it sure seems like creating a bajillion records with an uncontrolled append would do it.
B) Alternatively, I could append a field for each hour in the frame table and perform the same test using the field name and the Multi-Field Formula tool. But I think this creates the same problem as A, just with an absurd number of columns instead of rows. I'm still not clear whether Alteryx tolerates abuse better in record count or column count, so I'm not sure there's any advantage over A.
C) I think I could implement A with a batch macro and keep the record count a little more tidy, but I'd still run the risk of creating something unwieldy.
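To make the row-explosion concern in option A concrete, here is a hedged sketch in pandas (not Alteryx itself; the sample data and column handling are invented for illustration). The cross join pairs every frame hour with every fact record before filtering, which is exactly what drives 20k records to roughly 4 million:

```python
import pandas as pd

# Invented sample fact data using the column names from the post.
facts = pd.DataFrame({
    "Record ID": [1, 2],
    "Open_Date_Time": pd.to_datetime(["2024-01-01 00:30", "2024-01-01 01:15"]),
    "Close_Date_Time": pd.to_datetime(["2024-01-01 02:10", "2024-01-01 03:00"]),
})

# Frame of every hour between the min open and max close (the Generate Rows table).
frame = pd.DataFrame({
    "Hour": pd.date_range(facts["Open_Date_Time"].min().floor("h"),
                          facts["Close_Date_Time"].max().ceil("h"), freq="h"),
})

# The uncontrolled append: every frame hour paired with every record.
# Row count = len(frame) * len(facts), which is the explosion.
paired = frame.merge(facts, how="cross")

# Keep pairs where the record was open during that hour, then count per hour.
open_mask = (paired["Hour"] >= paired["Open_Date_Time"].dt.floor("h")) & \
            (paired["Hour"] <= paired["Close_Date_Time"])
counts = (paired[open_mask].groupby("Hour").size()
          .rename("Open Records").reset_index())
print(counts)
```

With 2 records and a 4-hour frame this is only 8 intermediate rows, but the product scales multiplicatively with both inputs, which is why the approach gets scary on real data.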
- instead of generating all the possible hours, just generate the list of hours each row fits into, then Summarize (group by, count)
- there's probably a way to represent open time spatially, then use the spatial match tool... turn unix seconds into lat/lon, maybe? then connect those to the universe anchor and put the list of hours (also represented spatially) in the T anchor... this one's a stretch :)
Use a Generate Rows tool on your fact data to create one record for each hour between Start and End, inclusive. Summarize this by hour, counting the number of records. Join this summarized hourly fact table to your hourly frame on hour, then union the joined records with any unmatched frame hours. Replace the null counts with 0.
This should avoid any row explosion issues since you're joining on hour as opposed to appending the two datasets.
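The steps above can be sketched in pandas (not Alteryx; sample data invented, and `explode` stands in for Generate Rows). Each record expands only into the hours it actually spans, so the intermediate table stays proportional to total open time rather than frame size times record count:

```python
import pandas as pd

# Invented sample fact data using the column names from the post.
facts = pd.DataFrame({
    "Record ID": [1, 2],
    "Open_Date_Time": pd.to_datetime(["2024-01-01 00:30", "2024-01-01 01:15"]),
    "Close_Date_Time": pd.to_datetime(["2024-01-01 02:10", "2024-01-01 03:00"]),
})

# Generate Rows equivalent: one row per hour each record spans, inclusive.
facts["Hour"] = facts.apply(
    lambda r: pd.date_range(r["Open_Date_Time"].floor("h"),
                            r["Close_Date_Time"].floor("h"), freq="h"),
    axis=1,
)
hourly = facts.explode("Hour")

# Summarize: count records per hour.
counts = hourly.groupby("Hour").size().rename("Open Records")

# Join onto the full hourly frame, then replace nulls with 0
# (hours with no open records).
frame = pd.DataFrame({"Hour": pd.date_range(counts.index.min(),
                                            counts.index.max(), freq="h")})
result = frame.merge(counts.reset_index(), on="Hour", how="left")
result["Open Records"] = result["Open Records"].fillna(0).astype(int)
print(result)
```

Because the join is on the hour key rather than a cartesian append, the biggest intermediate table here is the exploded hourly fact table, not frame-rows times fact-rows.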
Edit: After rereading @clmc9601 reply, I realized that I just expanded on her first point. Sorry for the duplication.