I've built a workflow where I need to join two tables, but the join is unusual: Table A has a list of dates, and I need to join Table B onto it wherever the date in Table B falls within the 11 months preceding the date on each row of Table A.
I solved it with the Append tool, followed by a Filter tool that removes rows where B_Date falls outside the range I'm looking for. Once the append is done, I can summarise the data the way I need.
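For anyone who thinks better in code than in tool icons, here's a minimal pandas sketch of the same two steps. The tiny sample tables and the Value column are made-up placeholders, not my real data:

```python
import pandas as pd

# Placeholder stand-ins for my two tables.
table_a = pd.DataFrame({"A_Date": pd.to_datetime(["2023-06-01", "2023-09-15"])})
table_b = pd.DataFrame({
    "B_Date": pd.to_datetime(["2022-08-01", "2023-01-10", "2023-09-01"]),
    "Value": [10, 20, 30],
})

# Step 1: the "Append" -- a full cross join, producing len(A) * len(B) rows.
crossed = table_a.merge(table_b, how="cross")

# Step 2: the "Filter" -- keep only rows where B_Date falls within the
# 11 months before A_Date.
lower = crossed["A_Date"] - pd.DateOffset(months=11)
result = crossed[(crossed["B_Date"] >= lower) & (crossed["B_Date"] <= crossed["A_Date"])]
```

The problem is that `crossed` materialises every pairing before the filter throws most of them away.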
I can get away with this because my tables are fairly small, but it's still way bigger than it should be. The Append tool warns you if you go over 16 rows, and I'm appending 623,000 rows onto 200 rows, which leaves me with 124,600,000 rows. That's a manageable number, but what if I needed to append 623,000 rows onto 44 million rows?

This solution works this time, but there has to be a more computationally efficient way to do this... right?
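To make the question concrete, here's roughly what I imagine the efficient version looks like, again sketched in pandas with the same placeholder data: sort Table B by date once, then binary-search each Table A row's window, so the full cross product never gets built. I don't know what the Alteryx equivalent of this would be, which is what I'm asking:

```python
import numpy as np
import pandas as pd

# Same placeholder tables as above.
table_a = pd.DataFrame({"A_Date": pd.to_datetime(["2023-06-01", "2023-09-15"])})
table_b = pd.DataFrame({
    "B_Date": pd.to_datetime(["2022-08-01", "2023-01-10", "2023-09-01"]),
    "Value": [10, 20, 30],
})

# Sort B once, then binary-search the window boundaries for each A row.
table_b = table_b.sort_values("B_Date").reset_index(drop=True)
b_dates = table_b["B_Date"].to_numpy()

pieces = []
for a_date in table_a["A_Date"]:
    lower = a_date - pd.DateOffset(months=11)
    # Index of the first B row >= lower, and one past the last B row <= a_date.
    start = np.searchsorted(b_dates, lower.to_datetime64(), side="left")
    stop = np.searchsorted(b_dates, a_date.to_datetime64(), side="right")
    matched = table_b.iloc[start:stop].copy()
    matched["A_Date"] = a_date
    pieces.append(matched)

result = pd.concat(pieces, ignore_index=True)
```

That does work proportional to len(A) * log(len(B)) plus the matched output, instead of len(A) * len(B), which is exactly the gap I'm asking about.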