The hardest part was reading part two of the question
Technical and complex description for a comparatively straightforward challenge. This one is possible with only core-level tools.
I expected to struggle with AoC today, but it was easier than I thought. I do think the question text for Part 2 is difficult to understand, though.
Created a gif of my solution to part 2
I thought it was an easy question on first read, but for some reason it took me way longer than expected. My solution is similar to the others - nothing too exciting.
Surprisingly, no macros were needed today - the hardest part is understanding part 2 🙄
This is not a solution post - it's about a way of thinking about building software that is reliable, stable, and super quick to deploy with zero UAT. In other words: Test-Driven Development.
So - we'll do this together on day 10's challenge.
Build Test Data
First: start by capturing some test data that reflects our current understanding of the problem.
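As a minimal sketch (in Python, though the same idea works in any tool), the test data could be the small worked example from the day-10 problem statement, together with the answer it should produce. The names here are illustrative, not from anyone's actual solution:

```python
# Hypothetical test data for the day-10 puzzle: the tiny worked example
# from the problem statement, plus the final register value it should yield.
SAMPLE_PROGRAM = """\
noop
addx 3
addx -5
"""

# Per the problem statement, register X starts at 1, gains 3, then loses 5,
# so it should end at -1.
EXPECTED_FINAL_X = -1

print(SAMPLE_PROGRAM)
print("expected final X:", EXPECTED_FINAL_X)
```

The point is simply that the data and the expected outcome are captured together, before any solution exists.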
Build a failing test:
Now you want to build a failing test. That seems like a silly idea - why build a failing test? Well, what you really want to know is whether your test process is actually working, not just telling you what you want to hear :-)
So we take this test data, put it through an empty shell of code, and then build the piece that checks the results.
At this stage, our solution is an empty shell - so all of our tests should fail.
Build an empty shell solution that gives the wrong answer
Now build the tester:
Run it to make sure you get a failure:
Where are we now?
If you have done this right, you now have:
- good test data that reflects the solution we WANT
- a good way of testing whether our software does this right
- an empty shell solution that does not work properly
The last 2 steps:
Red->Green: Now that you're confident you can immediately tell whether what you've built is working, you can build your solution iteratively and quickly, testing as you go. Build until your test cases pass, then stop.
Refactor: Finally, when you're done, you look around at other people's solutions and say "Aaah - that was clever - I wish I'd done it differently". I also often find that once I've finally got a green solution I hear myself say "Oh - *that's* what they meant in the requirement" - I only really understand the problem once I've solved it once, in a messy way. So now that you've solved it, and have test cases, you can feel confident refactoring (cleaning up / restructuring) into code you'd be proud of, proud to show others, and know is stable and well built. And because you have test cases, you can do this iteratively - at any point you know whether your code still works. In other words, this doesn't have to be a big bang: you can clean up in small sprints, with complete confidence.
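To make the green phase concrete: assuming (purely for illustration) that the instruction set is just `noop` (do nothing) and `addx N` (add N to register X, which starts at 1), a minimal implementation that turns the earlier failing test green might be as small as:

```python
# One possible green-phase implementation - deliberately simple and a bit
# naive; cleaning it up is what the refactor phase is for.
def solve(program: str) -> int:
    x = 1  # register X starts at 1
    for line in program.splitlines():
        if line.startswith("addx"):
            x += int(line.split()[1])
        # "noop" changes nothing, so no branch is needed
    return x


# The worked example from the problem statement now passes:
assert solve("noop\naddx 3\naddx -5\n") == -1
print("green!")
```

Once this passes, you stop building and move on to refactoring with the tests as your safety net.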
Why go to all this trouble?
Several reasons - but to my mind the major 3 are:
- At any time, you know whether your solution is ready for production - there's no guesswork or human UAT. You could even schedule the tests to run automatically (as Google does) every time you change anything
- If you change HOW you build the solution, you can still be confident that it works according to the required outcome. This is super important, because we often build something so complex and so hard to test that we're afraid to update it, clean it up, or do it better when we have better ideas (this cleanup is called "refactoring")
- If you discover a scenario that your solution doesn't handle, you can just add another test case - and you'll never have to worry about that particular defect again (i.e. regression testing)
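The regression-testing point in the last bullet can be sketched like this (the scenario and the `solve()` interpreter here are assumptions for illustration, matching the toy example earlier, not anyone's real solution):

```python
def solve(program: str) -> int:
    # Minimal interpreter (assumed for illustration): register X starts
    # at 1, "addx N" adds N to it, "noop" does nothing.
    x = 1
    for line in program.splitlines():
        if line.startswith("addx"):
            x += int(line.split()[1])
    return x


def test_noop_only_program() -> None:
    # Hypothetical newly-discovered scenario: a program containing no addx
    # instructions must leave X at its starting value. Once this test is in
    # the suite, that defect can never silently come back.
    assert solve("noop\nnoop\nnoop\n") == 1


test_noop_only_program()
print("regression test passed")
```

Each new defect becomes a new test, and the suite only ever gets stronger.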
Summary:
This is the essence of test-driven development, also known as Red-Green-Refactor:
- Red: Build some failing test cases and an empty shell solution
- Green: Build software / solution until the test cases pass
- Refactor: Based on what you now understand about the problem, plus ideas from others, you can change how the solution is built and still be confident that it gives the right output
It is slower for development in competitive situations like this - but for building software / data flows that need to work reliably in your day job and in a production context over time (as requirements change and evolve), this is one of the best work habits to instill in your team.
Struggled a lot with part 2, as my English wasn't good enough for it. I used DeepL to translate the task and gave it a second shot later. As on the last couple of days, I'm sharing a "not cleaned up" version that is only annotated; nothing was removed.
I decided to try another type of chart in Tableau today on the AoC leaderboard dataset. A highlighted map of the daily ranks per part :-)
Day 10 done ! ✅
That part 2 description was a pain to understand, but it was easier than expected!
Took a bit to understand the question, but most of my time went into slightly tweaking the process to get the results. Mod function FTW on p2
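For anyone curious, the modulo trick for part 2 might look something like this sketch (assuming a 40-pixel-wide display; the function name and exact convention are mine, not the poster's):

```python
# The column being drawn in a given cycle is cycle % 40, and the pixel is
# lit when the 3-wide sprite (centred on register X) covers that column.
WIDTH = 40


def pixel(cycle: int, x: int) -> str:
    column = cycle % WIDTH  # mod wraps the running cycle count onto one row
    return "#" if abs(column - x) <= 1 else "."
```

The mod keeps you from ever tracking row boundaries explicitly - the same formula works on every line of the display.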
Hi, my solution attached - a bit of relief for the brain after yesterday's challenge:)
Day 10 finished! Trying to wrap my head around part 2 was bloody awful but otherwise fairly painless today. Opted for spatial to show my letters!
Day 10 done - this was much easier than day 9. One tip for everyone: use a fixed-width font (I used Courier New) to make the output easier to read; otherwise the spaces don't line up with the # characters in regular old fonts.
Dirty solution below:
Here's the macro:
As you can see, I've hardcoded 2 things that could be parameterized: the timings for the instructions, and the milestones which are important (40/80/120 est). Other than that it's really just a few formulas and not much else.
My solution.
Some formulas may be redundant (given my understanding of the problem).
It took me a long time to understand what the second part was asking.
Only 1 year and 2 days late on this!