Like @dsmdavid, I'm stuck on part 2, though I have heard of some BaseA solves. For a while I assumed I could continue with the Generate Rows approach, but since the volume of each cube is in the billions, creating 400 of these is much harder than handling the handful that fall in the part 1 range of -50..50.
First: prep the data. This has a parameter for the maximum value so that I can use the same process for part 1 and part 2.
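The prep step can be sketched in Python (a hedged illustration with names of my own choosing; the actual solution is an Alteryx workflow): parse the raw "on/off" rows and optionally drop cubes outside a configurable maximum, which gives the part 1 window when set to 50.

```python
import re

def parse_steps(lines, max_coord=None):
    """Parse reboot rows; drop any cube outside +/-max_coord if set."""
    steps = []
    for line in lines:
        state, coords = line.split()
        x1, x2, y1, y2, z1, z2 = map(int, re.findall(r'-?\d+', coords))
        if max_coord is not None and (x1 < -max_coord or x2 > max_coord
                                      or y1 < -max_coord or y2 > max_coord
                                      or z1 < -max_coord or z2 > max_coord):
            continue  # outside the part 1 window; skip it
        steps.append((state == "on", ((x1, x2), (y1, y2), (z1, z2))))
    return steps

print(parse_steps(["on x=10..12,y=10..12,z=10..12"], max_coord=50))
```

Calling it with `max_coord=None` keeps every row, which is what part 2 needs.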
Then take the first cube defined by X1..X2; Y1..Y2; Z1..Z2, and use this as the starting point. From there, the process iterates for each NewCube:
- Take the next cube and split all the pre-existing cubes based on this cube's edges. So if your existing cube was 1..10 and the newly introduced cube is 5..6, then you split your existing cube into 3 parts: 1..4; 5..6; 7..10. This makes things much easier down the path.
- Then look for any cubes which occupy the same space as NewCube (because you've done the splits above, this is pretty simple).
- Drop all the cubes that intersect / overlap with NewCube.
- If NewCube is ON, then add NewCube to the existing cube list. If NewCube is OFF, ignore it; we only keep cubes that are on.
- Get the next NewCube.
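The split step above can be sketched in one dimension (a hedged Python illustration, not the poster's workflow): cut an existing interval at the edges of a newly introduced one, so every overlap becomes a clean, whole piece.

```python
def split_interval(existing, new):
    """Split `existing` (lo, hi) at the edges of `new` (lo, hi).

    Returns non-overlapping sub-intervals covering `existing`, cut so
    that each piece is either fully inside or fully outside `new`.
    """
    lo, hi = existing
    n_lo, n_hi = new
    # Keep only cut points that fall strictly inside the existing interval.
    cuts = sorted({lo, hi + 1} | {p for p in (n_lo, n_hi + 1) if lo < p <= hi})
    return [(a, b - 1) for a, b in zip(cuts, cuts[1:])]

# Existing 1..10 split by new 5..6 -> [(1, 4), (5, 6), (7, 10)]
print(split_interval((1, 10), (5, 6)))
```

Doing this independently on x, y and z splits a cube into at most 27 clean pieces, after which "occupies the same space as NewCube" is a simple per-piece check.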
As I said - this works well for the examples in part 1 and is relatively linear in time since it only increases the number of items in the list as a ratio of the number of new edges that are seen (not the number of points).
Now debugging to get a final good answer for part 2.
I had a simpler setup (or two or three) for part 2 but it was too slow, too much data. I finally got it down to 113M records and it completes in 3.5 minutes on my not so powerful machine.
My idea is to take each cube and find out where it intersects in any direction with any other cube. Then I split the x, y, z intervals on all of these intersections (one container for each direction) and join it back together. I then had to use a few tools to condense this down to as few records as possible before joining x to y to z. It should be dynamic and handle any input.
@patrick_digan - after days of chasing down silly debugging, finally got mine up and running. I remain impressed and intimidated that you got yours done within a day or two of the challenge being published!
OK - here's my final solution for part 1 & 2. This became complex enough that I built test harnesses for the sub-macros so that I could continue to run all my test cases as I built them out. I've attached the solution and the two test harnesses.
The original approach was a simple brute-force explosion, but it had to be completely re-engineered to make part 2 work.
The recipe: take the first cube; this is the starting universe. Then for each successive row (called New):
- Identify which of the existing universe cubes intersect New.
- If any intersect, split these up so that the intersection is clean. As an example: if the existing cube is 8..20 and the new one is 8..15, split the existing cube at the 15 point.
- Once you've split up the cubes so that any overlaps with New are simple / clean, you can remove the existing cubes that overlap with New and replace them with New.
- Take the next row as New.
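The recipe above can be sketched end to end in plain Python (my own assumed code, not the poster's Alteryx macros): keep a universe of disjoint "on" cubes; for each new row, split the intersecting cubes cleanly, drop the overlapping pieces, and add the new cube only when it is on.

```python
def split_1d(lo, hi, cut_lo, cut_hi):
    """Split [lo, hi] at the edges of [cut_lo, cut_hi] into clean pieces."""
    cuts = sorted({lo, hi + 1} | {p for p in (cut_lo, cut_hi + 1) if lo < p <= hi})
    return [(a, b - 1) for a, b in zip(cuts, cuts[1:])]

def intersects(a, b):
    """True if cubes a and b (each three (lo, hi) pairs) overlap."""
    return all(a[i][0] <= b[i][1] and b[i][0] <= a[i][1] for i in range(3))

def reboot(steps):
    """steps: list of (is_on, cube); returns the count of cells left on."""
    universe = []  # disjoint cubes that are currently on
    for is_on, new in steps:
        kept = []
        for cube in universe:
            if not intersects(cube, new):
                kept.append(cube)
                continue
            # Split at New's edges; keep only the pieces outside New.
            for xs in split_1d(*cube[0], *new[0]):
                for ys in split_1d(*cube[1], *new[1]):
                    for zs in split_1d(*cube[2], *new[2]):
                        piece = (xs, ys, zs)
                        if not intersects(piece, new):
                            kept.append(piece)
        if is_on:
            kept.append(new)  # OFF rows are dropped; we only keep on-cubes
        universe = kept
    return sum((x2 - x1 + 1) * (y2 - y1 + 1) * (z2 - z1 + 1)
               for (x1, x2), (y1, y2), (z1, z2) in universe)

steps = [
    (True,  ((10, 12), (10, 12), (10, 12))),
    (True,  ((11, 13), (11, 13), (11, 13))),
    (False, ((9, 11), (9, 11), (9, 11))),
    (True,  ((10, 10), (10, 10), (10, 10))),
]
print(reboot(steps))  # 39 on the small worked example from the puzzle
```

Because the universe stays disjoint, the final answer is just the sum of the kept cube volumes; no per-point explosion is ever needed.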
Top level flow:
The data prep: this just transforms and cleans up the raw data, along with setting a limit on the maximum size:
Set up the Macro
This piece creates the initial version of the map file, containing the first row, i.e. it sets up the initial universe.
The iterative process: this is the process which takes every new row and goes through the steps of:
- Split the existing cubes in the universe
- Then join any new cubes
- Write down the new cubes as part of the universe
The test harness: in order to make sure that the Splitter was doing its job correctly, I also built a test harness that ran it against a dozen or so different test cases.