Longtime readers may recall that I’ve spent portions of the past year promising to write about the 2019 Sloan Sports Analytics Conference, an event I left with a notebook full of amusing notes and observations. I must admit that even by TOA’s slow standards this one isn’t moving very fast – the 2020 edition of the conference has since come and gone (and just in time, given corona, and perhaps lucky to have escaped with its local reputation intact, given Biogen).
One of the highlights of last year’s event was a panel discussion that produced this memorable thought from conference co-chair Daryl Morey – ‘I kind of reject most studies, offhand’ (not the exact quote, but I think it captures the spirit). At the time, I greatly enjoyed the remark, but a year later I think it was ill-advised. At the very least, given that many of his invitees had devoted their entire careers to organizing, conducting, and analyzing studies, there was probably a qualifier or two Morey could have used to take the sting out.
As I was recently wandering the deserted paths of a socially distant Beacon Hill, I had a revelation about this incident – ‘studies’ implies one kind of thing, but in reality there are many kinds of studies, and it’s probably inappropriate to lump them all under one umbrella. Full disclosure: my view aligns almost entirely with Morey’s – give me the events over the conclusions, any day of the week, and I’ll figure it out (which I believe is the spirit behind his remark). But dwelling on the idea of rejecting all studies ‘offhand’ suggested that, compared to Morey, I was likely closer to the center, even if only by the tiniest margin, in how I thought about studies and their conclusions.
So, when would I stop and think before rejecting a study offhand? If the study is built on empirical observation, I’m all in. I think of such studies as carefully structured, researched, and analyzed observations of real behavior or activity. The neatest example I could think of was the American Cancer Society study that established the first causal link between smoking and increased mortality, primarily due to lung cancer. If President Kennedy had been one to dismiss studies offhand, it might have cost many Americans valuable years of life expectancy.
The studies that always put me on guard are those premised on grand experiments. I won’t analyze every little reason why these findings are so often debunked or even reversed years later (though as one example I’ll link this article, which describes challenges to the Stanford prison experiment). It just seems that for so many reasons – a desire for results, poor experimental design, statistical error – these kinds of studies have built up a track record that has created plenty of jaded observers (me among them) who protect themselves from being fooled again by simply dismissing all findings offhand.
The biggest difference I could find between these two forms of study is the observer effect. When someone presents results based on observing ongoing natural behavior, that feels like a very different thing to me than when an experimenter records responses to a series of carefully constructed cues. In the latter case, the subject is aware of the observer, and that matters (proven, I’m sure, by a study). I believe researchers put in all the effort they can to account for this effect when they conduct their work, but I suspect the effect is too real and too complex to design away. At best, a carefully constructed experiment can point the way to the follow-up question, but I’m going to remain skeptical of any such result that claims to have the answer.