Good point, @OldDogNewTricks. The data come out in tabular format from the Model Comparison tool. You could apply a sort and use the top record to dynamically automate which model will be used for scoring.
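To illustrate the idea outside Alteryx, here's a minimal Python sketch of "sort and take the top record" (the model names and accuracy values are made up for the example):

```python
# Hypothetical Model Comparison output: one record per model with its metric
results = [
    {"model": "decision_tree", "accuracy": 0.87},
    {"model": "forest_model", "accuracy": 0.91},
    {"model": "logistic_regression", "accuracy": 0.84},
]

# Sort descending by accuracy and take the top record,
# i.e. the model you would route into the Score tool
ranked = sorted(results, key=lambda r: r["accuracy"], reverse=True)
best_model = ranked[0]["model"]
print(best_model)
```

In a workflow, the equivalent would be a Sort tool (accuracy descending) followed by a Sample tool taking the first record.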
Also much longer than 10 minutes! I learned a valuable, if frustrating, lesson today: check the default field type! In the Formula tool, I forgot to change days_of_retention from string to int32, which caused both of my models to evaluate as 100% accurate. To figure it out, I had to download one of the posted solutions and join the output of its Formula tool with the output of mine on all the fields, which surfaced a type mismatch in the Join tool.
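The root cause is easy to reproduce in plain Python: a value stored as a string never equals the same value stored as an integer, so any join or comparison keyed on that field silently mismatches (the field name below mirrors the one from the exercise; the value is made up):

```python
# The same number, once as a string and once as an integer
days_of_retention_str = "30"   # field left at the default string type
days_of_retention_int = 30     # field correctly cast to int32

# Equality fails purely because of the type mismatch,
# which is why a join on this field matches nothing
print(days_of_retention_str == days_of_retention_int)
```

This prints `False`, which is the same mismatch the Join tool flagged once the two outputs were compared field by field.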
All in all, it was a good exercise in debugging, though, so the time wasn't completely wasted.
This took me several hours, mostly spent trying to understand why the Score tool wasn't outputting metadata; none of my downstream tools could remember any of the field names. Apparently it's a bug. I also couldn't work out why the Interactive output of the tools and the Model Comparison tool gave different accuracy results...