Hi,
This is a fun one. I can think of a few ways to approach this problem; the trouble is, I'm not sure any of them are particularly efficient, so I wanted to ask the pros.
Here's the setup:
I have a table that looks like this:
[Record ID] [Open_Date_Time] [Close_Date_Time]
My objective is to create an output that looks like this:
[Date_Time_By_Hour] [Open_Records]
I'm pretty sure I'm already on the right path: I've created a table of every hour between the min and max record using Generate Rows. I assume that, regardless of the solution, I'd be using this as a frame to compute against.
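Outside Alteryx, the hourly frame (what Generate Rows produces here) can be sketched in plain Python — the function name and the truncate-to-the-hour behavior are my assumptions, not anything the workflow dictates:

```python
from datetime import datetime, timedelta

def hourly_frame(start, end):
    """Every hour from start to end inclusive, truncated to the top of the hour."""
    t = start.replace(minute=0, second=0, microsecond=0)
    hours = []
    while t <= end:
        hours.append(t)
        t += timedelta(hours=1)
    return hours

frame = hourly_frame(datetime(2024, 1, 1, 0, 30), datetime(2024, 1, 1, 4, 0))
# covers 00:00 through 04:00 -> 5 hourly timestamps
```

In the workflow, `start` and `end` would come from a Summarize tool taking the min of [Open_Date_Time] and the max of [Close_Date_Time].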

The way I see it:
A) Append the frame table to the fact data, creating a distinct record for every possible date/time in the frame crossed with every unique record in the fact data. Then a simple formula (if frame_date_time >= [Open_Date_Time] and blah blah), summarize the results of the if/then formula, and I get my result. This "works," but my 'hello world' sample for this problem is about 20k records, and after appending it's about 4 million records. It seems like if I expand the fact data I'd be creating quite a few records, and I worry about performance and stability. This might be unfounded, because I've never actually created a workflow that had issues with performance or stability — but it sure seems like creating a bajillion records on an uncontrolled append would do it....
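Approach A boils down to a cross join followed by a filter-and-count, which a few lines of Python can mimic (the sample records and field order here are hypothetical, just to make the shape concrete):

```python
from datetime import datetime

# Hypothetical fact data: (record_id, open_dt, close_dt)
records = [
    (1, datetime(2024, 1, 1, 0, 15), datetime(2024, 1, 1, 2, 45)),
    (2, datetime(2024, 1, 1, 1, 30), datetime(2024, 1, 1, 3, 10)),
]
frame = [datetime(2024, 1, 1, h) for h in range(5)]  # hourly frame, 00:00-04:00

# Every (hour, record) pair gets tested, then summarized per hour --
# this is why the row count explodes to len(frame) * len(records).
open_counts = {
    hour: sum(1 for _, opened, closed in records if opened <= hour <= closed)
    for hour in frame
}
# e.g. at 02:00 both records are open, so the count there is 2
```

This also makes the blow-up easy to reason about: 20k records against a frame of ~200 hours is already 4M intermediate rows before the Summarize collapses them back down.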
B) I guess I could also append a field for each of the combinations in the frame table and perform the same test, maybe using the field name and a multi-field formula. But I think this creates the same problem as A, just by adding an absurd number of columns instead of rows. I'm still not totally clear whether Alteryx prefers to be abused in terms of record count or column count, so I'm not sure there's any advantage to this over A.
C) I think I could implement A with a batch macro and keep the record count a little tidier, but I think I still run the risk of creating something unwieldy.
Are there any other methods that I'm not thinking of?