

Advent of Code 2022 Day 10 (BaseA Style)

Alteryx Community Team
Discussion thread for day 10 of the Advent of Code.
8 - Asteroid

The hardest part was reading part two of the question.



13 - Pulsar

Technical and complex description for a comparatively straightforward challenge. This one is possible with only core-level tools.


Getting the part 2 characters aligned and readable was the hardest part for me. The # and . were different sizes. I remembered @AkimasaKajitani's star character in his browse tools. The ★ paired with _._ was much more readable!
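As a rough illustration of that readability trick, here is a minimal Python sketch of the part 2 CRT logic (the glyph choices are just an assumption to mirror the ★ idea, not anyone's actual workflow):

```python
def render_crt(program, lit="★", dark="."):
    """Draw the 40x6 CRT from AoC 2022 Day 10 part 2.

    The 3-pixel sprite is centred on register X; the pixel being drawn
    each cycle is lit when it overlaps the sprite. Swapping the lit/dark
    glyphs (e.g. '★' instead of '#') can make the letters much easier to read.
    """
    x, pixels = 1, []
    for line in program:
        # noop takes 1 cycle; addx V takes 2 cycles, then X += V
        ticks, delta = (2, int(line.split()[1])) if line.startswith("addx") else (1, 0)
        for _ in range(ticks):
            col = len(pixels) % 40  # column currently being drawn
            pixels.append(lit if abs(col - x) <= 1 else dark)
        x += delta
    return "\n".join("".join(pixels[r:r + 40]) for r in range(0, len(pixels), 40))
```

With 40 noops, X stays at 1, so only the first three columns of the row light up.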


16 - Nebula

I expected to struggle with the AoC today, but it was easier than I thought. That said, I think the question text for Part 2 is difficult to understand.


Inspired by @clmc9601, I tried to make it easier to read by using the Table tool!




8 - Asteroid

Created a gif for the solution for part 2

13 - Pulsar

I thought it was an easy question upon initial read but for some reason took me way longer than expected. My solution is similar to the others, nothing too exciting.




9 - Comet

Surprisingly, no macros needed for today - the hardest part is understanding part 2 🙄

17 - Castor

This is not a solution post - it's about a way to think about building software that is reliable, stable, and super quick to deploy with zero UAT. In other words: Test-Driven Development.


So - we'll do this together on day 10's challenge.


Build Test Data

First: start by capturing some test data that reflects our current understanding of the problem.
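For this sketch I'll use Python rather than an Alteryx workflow. The tiny programs and expected answers below are hypothetical hand-worked cases, not the official AoC sample:

```python
# Each case pairs an input program with the part 1 answer worked out by
# hand from the puzzle text. Capturing these FIRST pins down our current
# understanding of the problem, before any solution exists.
TEST_CASES = [
    # X never changes, so the signal strength at cycle 20 is 20 * 1
    (["noop"] * 20, 20),
    # ten 'addx 5' = 20 cycles; during cycle 20, X is 1 + 9*5 = 46 -> 20 * 46
    (["addx 5"] * 10, 920),
]
```

The point is that each expected value was derived from the requirement by hand, independently of any implementation.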










Build a failing test:

Now you want to build a failing test. This seems like a silly idea - why build a failing test? Well - what you want is to know whether your test process is actually working, not just telling you what you want to hear :-)


So - we take this test data, and put it through an empty shell of code, and then build the piece that checks the results.

At this stage, our solution is an empty shell - so all of our tests should fail.


Build an empty Shell solution that gives the wrong answer
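In Python terms, the shell could be as simple as this (a stand-in for the empty workflow, using the same hypothetical test cases):

```python
def signal_strength_sum(program):
    """Deliberately wrong placeholder for the part 1 solver.

    Returning a constant guarantees the first test run fails, which
    proves the test harness can actually detect a broken solution.
    """
    return -1  # no real logic yet
```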




Now build the tester:



Run it to make sure you get a failure:
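A minimal Python harness for the same idea (the solver and cases are the hypothetical ones from the earlier steps):

```python
def run_tests(solver, cases):
    """Run every (program, expected) case; return (passed, failed) counts."""
    passed = failed = 0
    for program, expected in cases:
        if solver(program) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed

def shell_solver(program):
    return -1  # the empty-shell solution from the previous step

cases = [(["noop"] * 20, 20), (["addx 5"] * 10, 920)]
# With the shell solver, every case should fail - confirming the harness works.
```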



Where are we now?

If you have done this right, you now have:

- good test data that reflects the solution we WANT

- a good way of testing whether our software does this right

- an empty shell solution that does not work properly


The last 2 steps:

Red->Green: Now that you are confident that you can immediately tell if what you built is working or not - you can build your solution iteratively and quickly, testing as you go along.    You build until your test cases pass and then stop.
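Sketching the green step in Python: a straightforward simulation replacing the shell, built just until the hypothetical cases pass (this is one plausible part 1 implementation, not the author's workflow):

```python
def signal_strength_sum(program):
    """Simulate the Day 10 CPU for part 1.

    X starts at 1; 'noop' takes 1 cycle, 'addx V' takes 2 cycles and
    only then adds V to X. Sum cycle * X at cycles 20, 60, ..., 220.
    """
    x, cycle, total = 1, 0, 0
    for line in program:
        ticks, delta = (2, int(line.split()[1])) if line.startswith("addx") else (1, 0)
        for _ in range(ticks):
            cycle += 1
            if cycle in (20, 60, 100, 140, 180, 220):
                total += cycle * x
        x += delta  # addx lands only after its second cycle
    return total

cases = [(["noop"] * 20, 20), (["addx 5"] * 10, 920)]
all_green = all(signal_strength_sum(p) == want for p, want in cases)
```

Once `all_green` is true, you stop building and move on to refactoring.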


Refactor: Finally, when you're done, you look around at other people's solutions and say "Aaah - that was clever - I wish I'd done it differently."

Also, I often find that when I've finally got a green solution, I hear myself say "Oh - that's what they meant in the requirement" - so I only really understand the problem once I've solved it once in a messy way.

So now that you've solved it once and have test cases, you can feel confident in refactoring (cleaning up / restructuring) into code that you would be proud of / proud to show others / know is stable and well built. And because you have test cases, you can do this iteratively, and at any point you know if your code still works. In other words, this doesn't have to be a big bang - you can clean up in small sprints, with complete confidence.


Why go to all this trouble?

Several reasons - but in my mind the major 3 are:

- At any time, you know if your solution is ready for production - there's no guesswork or human UAT. You could even schedule this to run automatically, like Google does with their test cases, every time you change anything.

- If you change HOW you build the solution, you can still be confident that it works according to the required outcome. This is super important 'cause often we build something so complex and so hard to test that we're just afraid to update it, clean it up, or do it better when we have better ideas (this cleanup is called "refactoring").

- If you discover a new scenario that didn't work in your solution, you can just add another test case - and you'll never have to worry about this particular defect again (i.e. regression testing).



This is the essence of Test-Driven Development, also known as Red-Green-Refactor:

- Red: Build some failing test cases and an empty shell solution

- Green: Build software / solution until the test cases pass

- Refactor: Based on what you now understand about the problem, and ideas from others - you can change the way the solution is built and still be confident that it will still give the right output


It is slower for development in competitive situations like this - but for building software / data flows that need to work reliably in your day-job and in a production context over time (as requirements change and evolve) - this is one of the best work habits to instill in your team.




15 - Aurora

Struggled a lot with part 2 as my English wasn't good enough for it. I used DeepL to translate the task and gave it a second shot later. Similar to the last couple of days, I'm sharing a "not cleaned up" version that is only annotated; nothing was removed.





15 - Aurora

I decided to try another type of chart in Tableau today on the AoC leaderboard dataset. A highlighted map of the daily ranks per part :-)