Weekly Challenges

Solve the challenge, share your solution and summit the ranks of our Community!

SOLVED

Challenge #159: April ENcryptanalytics

Kenda
16 - Nebula
Spoiler
Why do small things like this, that cause so much grief, always happen to me? I built out my whole workflow and ran it, only to find it failed when passing through the macro. So I redid basically everything, only to realize it was failing because in my own Text Input I had named the field Field1 when the macro was expecting a field called data. *facepalm*

Capture.PNG
jamielaird
14 - Magnetar

Here's my solution
Spoiler
I created a dictionary of ~800 UTF-8 characters and performed a random character substitution for half of the letters in the string. With 10 differently encoded versions of the original text it can be cracked easily using the decoder.

Screenshot 2019-06-23 at 17.13.21.png
Screenshot 2019-06-23 at 17.13.52.png
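A minimal Python sketch of this scheme, assuming the details above (the pool size, the ~50% substitution rate, and the 10 copies follow the post; the function names are my own): since roughly half of the characters survive in each copy, a per-position majority vote across the copies recovers the original text.

```python
import random
from collections import Counter

# A pool of ~800 distinct substitution characters (an assumption --
# the post only says "~800 UTF-8 characters").
POOL = [chr(c) for c in range(0x21, 0x21 + 800)]

def encode(text, keep_prob=0.5):
    """Keep each character with probability keep_prob; otherwise
    substitute a random character from the pool."""
    return "".join(ch if random.random() < keep_prob else random.choice(POOL)
                   for ch in text)

def decode(copies):
    """Per position, the most frequent character across all copies
    is (almost always) the original one."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))

message = "WEEKLY CHALLENGE"
copies = [encode(message) for _ in range(10)]
```

With only 10 copies there is a small chance that some position keeps too few original characters to win the vote; adding more encoded copies makes recovery essentially certain.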
LordNeilLord
15 - Aurora


Spoiler
Capture.PNG
Sample.PNG
RWvanLeeuwen
11 - Bolide

My 1000 record long input:

159 my input.jpg
And my workflow to get there:

Spoiler
159.png
SeanAdams
17 - Castor

Very different kind of challenge - thank you @TerryT 
Spoiler
Flow.png
The solution was to split the text into characters, add a random offset to each character ID, set the offset to 0 for roughly 1 in 10 of them, then use a mod function to bring the result back into the 1-160 range.

Added the 155 challenge decoder as a macro to check this.
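Those steps can be rendered as a rough Python sketch (my own translation of the Alteryx tools, not the original workflow; the 1-160 character-ID range and the 1-in-10 unshifted characters follow the post):

```python
import random

def encode_offsets(text, keep_every=10, char_range=160):
    """Shift each character ID by a random offset, leaving roughly
    1 in keep_every characters unshifted, and wrap the shifted ID
    back into the 1..char_range range with a mod."""
    out = []
    for ch in text:
        # offset 0 keeps the original character (the (ord - 1) % range + 1
        # wrap is the identity for IDs already in 1..char_range)
        offset = 0 if random.randrange(keep_every) == 0 \
                   else random.randrange(1, char_range)
        out.append(chr((ord(ch) + offset - 1) % char_range + 1))
    return "".join(out)
```

The untouched characters are what a decoder like the challenge 155 one can latch onto when several differently encoded copies are compared.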
TimHuff
7 - Meteor

All the special sauce is in the Python tool.

Spoiler
159.jpg
TimothyManning
8 - Asteroid
Spoiler
159. Data Analysis 2.PNG
159. Data Analysis.PNG
159. Data Analysis 3.PNG


Cool Challenge!
TonyA
Alteryx Alumni (Retired)

This one was a lot of fun. 

Spoiler
I assumed that the number of rows would be equal to the number of characters in the message. I played a bit with the sampling rate (the percentage of values held at their original value) just to see when the encoding would start to break. For this 70-character example, I started getting random errors below a 25% sampling rate. I'm sure this threshold would change with the number of rows of encoding.
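That breakdown point can be estimated with a quick simulation. The sketch below is my own (not the original workflow): it models one message position, codes the original value as 0 and substitutes as random draws from a large pool, and counts how often a majority vote across the encoded rows picks the wrong value as the sampling rate drops.

```python
import random
from collections import Counter

def failure_rate(rows, keep_prob, trials=2000, pool_size=800):
    """Fraction of simulated positions where the original value
    (coded as 0) loses the per-position majority vote."""
    fails = 0
    for _ in range(trials):
        votes = [0 if random.random() < keep_prob
                 else random.randrange(1, pool_size)
                 for _ in range(rows)]
        winner = Counter(votes).most_common(1)[0][0]
        if winner != 0:
            fails += 1
    return fails / trials
```

With 70 rows, a 50% sampling rate essentially never fails, while rates near 1% fail about half the time, consistent with the decoding breaking down somewhere in between.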
Karam
8 - Asteroid

Solution attached.
Spoiler
Challenge 159.PNG
nivi_s
8 - Asteroid

Challenge #159 Solved!
Spoiler
clipboard_image_0.png