In ‘Leftover #2’ for this post, I talked a little more about comparisons and how relying on these often leads to ‘bizarre forms of assessment’. As I wrote that post, I was reminded of a story about former Dallas Cowboys running back Emmitt Smith.
When teams were considering Smith as a potential draft choice, many downgraded him because his foot speed was considered a little slow among the running backs available that season. Jimmy Johnson, the coach who eventually chose him in the first round, had a slightly different take. Johnson simply noted that Smith ‘never got caught from behind’ and stopped worrying about Smith’s speed. Smith is, of course, now in the Hall of Fame and widely counted among the great helmet football players of all time.
Of course, this is where the rebuttal comes in. You may be saying it yourself, reader – but isn’t saying a player never gets caught from behind its own form of comparison? What if professional defenders are simply faster than the college players he outran? And wouldn’t any running back faster than Smith also never get caught from behind?
True, true, I acknowledge, these are all fair questions. But remember, the comparison Johnson made was derived directly from the question at hand. A coach concerned about foot speed who watches games and determines whether the player is fast enough in a game situation gets very different information than a coach who lines everyone up at a track and has them race. The former gets direct, task-relevant feedback in the context of the sport, while the latter gets indirect, task-related feedback that must then be translated into the context of the sport.
The most common valuation systems seem to have two properties – they are aggregated and they are external. These systems generalize a group through a mechanism unrelated to the direct question at hand and tempt decision makers to resort to easily derived comparisons within the system to determine value. The helmet football teams that determine foot speed with a stopwatch aren’t necessarily wrong, but answering the semi-relevant question of ‘is he the fastest?’ allows those teams to ignore the more relevant question of ‘is he fast enough?’
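To put the two questions side by side as literally as I can, here is a minimal sketch in Python. Everything in it is made up – the player names, the speeds, and the 8.0 yards-per-second cutoff are invented stand-ins – but it shows how cheaply a system can answer ‘is he the fastest?’ and how the answer to ‘is he fast enough?’ depends on a threshold someone had to determine by watching the game.

```python
# A minimal sketch of the two questions, using made-up numbers.
# The players, speeds, and the 8.0 yards-per-second threshold are
# hypothetical; they stand in for whatever the real judgment would be.

players = {
    "Back A": 8.9,   # top speed in yards per second (invented)
    "Back B": 8.4,
    "Back C": 8.1,
}

# The aggregated, external question: "Is he the fastest?"
# A stopwatch (or a computer) can answer it with a single comparison.
fastest = max(players, key=players.get)

# The individualized, internal question: "Is he fast enough?"
# The threshold has to come from watching the game itself –
# here it is just an assumed number standing in for that judgment.
FAST_ENOUGH = 8.0
fast_enough = {name: speed >= FAST_ENOUGH for name, speed in players.items()}

print(fastest)       # one "winner", everyone else gets downgraded
print(fast_enough)   # each back is judged against the task, not each other
```

The max() call is the stopwatch; the threshold is the judgment – and only the second one required anyone to think about the sport.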
The better valuation systems seem to have the opposite properties – they are individualized and they are internal. This style of system demands more work to properly determine value. But I think the benefits are fairly straightforward – such systems account for nuance, variation, and relevance far better than an external system built on aggregates. If the decision makers then want to use these determinations to demonstrate value in terms of a score, they should do so – but this should be an active step and, perhaps more importantly, the final step.
The overall point is perhaps far simpler than anything I’ve written thus far on the topic – a comparison should be an active, final step. If anything automated – a computer, a scantron, a stopwatch – can do the comparison, then I would recommend throwing out the comparison. If the question of deriving value were so obvious, it wouldn’t require deriving.