ML benchmarks and their pitfalls
On marginal efficiency gain in paperclip manufacture
August 16, 2020 — April 13, 2021
Machine learning’s gamified/Goodharted version of the replication crisis is a paper mill, or perhaps a paper treadmill. In this system, something counts as “results” if it performs well on some conventional benchmarks. But how often does that demonstrate real progress, and how often is it merely overfitting to the benchmarks?
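To see how cheap such “results” can be, here is a toy simulation (mine, not from any of the papers below): coin-flip models with no skill at all, where merely picking the best of many on a fixed benchmark manufactures an apparent improvement that a fresh test set immediately erases.

```python
# Adaptive overfitting by selection alone: "models" here are coin-flip
# predictors with zero real skill, yet the best of many submissions
# looks like progress on the fixed benchmark.
import numpy as np

rng = np.random.default_rng(0)
n, n_models = 1_000, 500                    # benchmark size, submissions
labels = rng.integers(0, 2, size=n)         # the public benchmark labels
fresh = rng.integers(0, 2, size=n)          # a held-back replication set

preds = rng.integers(0, 2, size=(n_models, n))
bench_acc = (preds == labels).mean(axis=1)  # leaderboard scores
best = bench_acc.argmax()

print(f"best leaderboard accuracy: {bench_acc[best]:.3f}")            # ≈ 0.55
print(f"same model, fresh labels:  {(preds[best] == fresh).mean():.3f}")  # ≈ 0.50
```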
Oleg Trott on How to sneak up competition leaderboards.
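The generic version of the trick is leaderboard probing. A minimal sketch, assuming a log-loss leaderboard; the `score` function below stands in for the competition server, and Trott’s post covers subtler variants that work under submission limits:

```python
# Hypothetical leaderboard probe: recover one hidden test label per
# submission from a log-loss leaderboard. Baseline all-0.5 predictions
# score exactly ln(2); a confident guess on one item moves the score
# down if the guess was right and up if it was wrong.
import numpy as np

rng = np.random.default_rng(1)
hidden = rng.integers(0, 2, size=100)       # labels only the server knows

def score(preds, eps=1e-15):
    """Mean log loss, as a leaderboard would report it."""
    p = np.clip(preds, eps, 1 - eps)
    return -np.mean(hidden * np.log(p) + (1 - hidden) * np.log(1 - p))

base = np.full(len(hidden), 0.5)            # uninformative baseline
recovered = []
for i in range(len(hidden)):                # one submission per label
    probe = base.copy()
    probe[i] = 0.9                          # confident guess on item i
    recovered.append(int(score(probe) < score(base)))

print("fraction of labels recovered:",
      (np.array(recovered) == hidden).mean())  # 1.0
```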
Jörn-Henrik Jacobsen, Robert Geirhos, Claudio Michaelis: Shortcuts: How Neural Networks Love to Cheat.
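A minimal demonstration of the phenomenon on synthetic data of my own devising: a “watermark” feature that tracks the label perfectly in training but is meaningless at deployment, so the model that cheats aces the benchmark and collapses under distribution shift.

```python
# Shortcut learning in miniature: the linear model latches onto the
# spurious watermark rather than the weak genuine signal, then fails
# the moment the watermark stops correlating with the label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2_000
y_tr = rng.integers(0, 2, size=n)
X_tr = np.column_stack([
    y_tr.astype(float),                        # watermark: equals the label
    y_tr + rng.normal(0, 2.0, size=n),         # weak but genuine signal
])
y_te = rng.integers(0, 2, size=n)
X_te = np.column_stack([
    rng.integers(0, 2, size=n).astype(float),  # watermark now decorrelated
    y_te + rng.normal(0, 2.0, size=n),
])

clf = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
print("train accuracy:", clf.score(X_tr, y_tr))  # ≈ 1.0, via the shortcut
print("test accuracy: ", clf.score(X_te, y_te))  # ≈ 0.5: the cheat unravels
print("weights [shortcut, signal]:", clf.coef_.round(2))
```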
Sanjeev Arora and Yi Zhang’s Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis takes a minimum-description-length approach to the meta-overfitting of models, which I will not summarize except to recommend it for being extremely psychedelic.
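The skeleton of the argument, as I paraphrase it (constants are mine; see the paper for the careful statement): take a Hoeffding bound and union-bound it over every model describable in at most $k$ bits of post-benchmark information, of which there are at most $2^{k+1}$. Then with probability at least $1-\delta$, for every such model $f$,

$$
\bigl|\widehat{\operatorname{err}}_n(f) - \operatorname{err}(f)\bigr|
\;\le\;
\sqrt{\frac{(k+1)\ln 2 + \ln(2/\delta)}{2n}},
$$

where $\widehat{\operatorname{err}}_n$ is the error on the $n$-example benchmark. The punchline is that meta-overfitting is controlled by the description length of the winning model, not by how many models the community actually threw at the test set.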
1 Explicit connection to Goodhart’s law
Filip Piekniewski on the tendency to select bad target losses for convenience.
Measuring Goodhart’s Law at OpenAI.
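The OpenAI study quantifies this via best-of-$n$ sampling against a learned proxy reward. The mildest regime is easy to reproduce with synthetic numbers (mine below, not theirs): even when proxy and true reward are honestly correlated, selecting the best of $n$ by proxy inflates the proxy score much faster than the true one.

```python
# Best-of-n selection against a noisy proxy (regressional Goodhart).
# The proxy score of the chosen sample keeps climbing with n, while its
# true score climbs far more slowly: optimizing the measure inflates it
# relative to the target. Purely synthetic, not OpenAI's setup.
import numpy as np

rng = np.random.default_rng(3)
trials = 2_000
for n in (1, 4, 16, 64, 256, 1024):
    true = rng.normal(size=(trials, n))          # true reward of each sample
    proxy = true + rng.normal(size=(trials, n))  # proxy = true + noise
    pick = proxy.argmax(axis=1)                  # optimize the proxy
    rows = np.arange(trials)
    print(f"n={n:5d}  proxy of pick: {proxy[rows, pick].mean():5.2f}  "
          f"true of pick: {true[rows, pick].mean():5.2f}")
```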
2 Measuring speed
Lots of algorithms claim to go fast, but on modern hardware that is a surprisingly slippery claim: memory layout, caches, and frequency scaling can easily swamp the effect being measured. Stabilizer attempts to randomise memory layout to give a “fair” comparison.
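Stabilizer itself works at the compiler/runtime level, repeatedly re-randomising code, stack, and heap layout mid-run so that layout luck averages out. The cheap end of the same discipline, sketched in Python: warm up first, randomise the interleaving of contenders so drift does not systematically favour whichever ran last, and report medians with spread rather than one lucky timing.

```python
# Not Stabilizer; just benchmarking hygiene in the same spirit:
# warm-up runs, randomized interleaving of the contenders, and a
# median with interquartile range instead of a single timing.
import random
import statistics
import time

def bench(candidates, reps=50, warmup=5):
    """Time each zero-arg callable `reps` times in randomized order."""
    for fn in candidates.values():
        for _ in range(warmup):
            fn()                               # prime caches and allocators
    trials = [name for name in candidates for _ in range(reps)]
    random.shuffle(trials)                     # interleave the contenders
    times = {name: [] for name in candidates}
    for name in trials:
        t0 = time.perf_counter()
        candidates[name]()
        times[name].append(time.perf_counter() - t0)
    for name, ts in times.items():
        q1, med, q3 = statistics.quantiles(ts, n=4)
        print(f"{name}: median {med*1e6:.0f}us, "
              f"IQR [{q1*1e6:.0f}, {q3*1e6:.0f}]us")

def manual_sum():
    total = 0
    for x in range(10_000):
        total += x
    return total

bench({
    "builtin sum": lambda: sum(range(10_000)),
    "python loop": manual_sum,
})
```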