Sunday, June 4, 2023

the business bro confirms leech therapy

A friend once described the fundamental challenge in his marketing director role as the problem of "you never know for sure if it worked". He meant this in the specific way that fields such as statistics define causation, which sets a standard for knowing the effect of an intervention on an outcome. His organization ran campaigns in a way that never met that standard, so they never knew the extent to which their results were caused by their decisions, if at all. One particularly helpful example he used involved advertising during the Olympics. If his firm wished to determine the effect of their campaign in a statistically valid way, they would have first needed to randomly divide their audience into two groups so that one would see ads while the other would not, and only then could they compare differences in the groups' behavior to determine the effect of the ads.

Some of you may be familiar with this process by other names - a randomized controlled trial, for example, or A/B testing. In their respective fields these are the gold standards for determining causation, which in more conversational terms means knowing for sure that one thing did in fact lead to another. It's rare to hear anyone dismiss the power of this approach when it's discussed in the abstract - clearly, if you had an idea for progressing toward a goal, you would want to know whether your idea worked. But I have noticed lately that a lot of people lose this conviction when it comes time to put the concept into practice, partly because implementation runs into practical obstacles. One thing my friend pointed out was that no one he worked with would have agreed to omit certain major markets from a campaign, which meant the randomization necessary for clean results was no longer possible. Yet my feeling remains that in most cases understanding and implementing this style of thinking would do a great deal to make us all better at our work.
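For anyone who likes to see the mechanics spelled out, here's a rough sketch of the process in Python. Everything in it is hypothetical - the audience list and the measure_outcome function are stand-ins for whatever tracking a marketing team actually has - but the shape of the idea is the same: random split, expose one group, compare.

```python
import math
import random

def ab_test(audience, measure_outcome, seed=0):
    """Randomly split an audience, show ads only to the treatment group,
    then compare conversion rates between the two groups."""
    rng = random.Random(seed)
    shuffled = list(audience)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]

    # measure_outcome(person, saw_ad) -> True/False is a placeholder for
    # whatever tracking actually exists (purchases, sign-ups, and so on).
    t_rate = sum(measure_outcome(p, saw_ad=True) for p in treatment) / len(treatment)
    c_rate = sum(measure_outcome(p, saw_ad=False) for p in control) / len(control)

    # Two-proportion z-test: how surprising is the observed difference
    # if the ads actually did nothing at all?
    pooled = (t_rate * len(treatment) + c_rate * len(control)) / len(shuffled)
    se = math.sqrt(pooled * (1 - pooled) * (1 / len(treatment) + 1 / len(control)))
    z = (t_rate - c_rate) / se if se > 0 else 0.0
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return t_rate, c_rate, p_value
```

The important part isn't the arithmetic at the end, it's the shuffle at the beginning - without the random split, comparing the two groups tells you nothing about what the ads actually caused.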

One area where I've thought more about this lately is the follow-up reminders I get from certain pushy colleagues. These usually have something to do with an urgent request or an upcoming deadline - "hey, can I get an update on this?", that sort of thing. I don't find these very useful, since they often just create extra work by forcing me to stop working toward the deadline in order to send a response, but I don't have any hope that things will ever change with these colleagues. My reasoning follows the logic above. As far as I can tell, they don't have any sort of system for isolating the effect of a follow-up on the outcome. Therefore, every time they follow up, there are only two potential results - either everything works out, which strengthens the belief that following up is the right idea, or it doesn't work out, which I suppose leaves things open to interpretation regarding the effect of the follow-up. My suspicion is that the common interpretation in the latter case is not "I wonder if following up has any effect" but rather "I should have followed up more often". It makes a strange sort of sense because they're missing the critical data point needed to consider another conclusion - the case where they sent no follow-up yet everything worked out - but the more fundamental issue is that since they likely believe following up is the right thing to do, they'll never create the conditions for collecting the missing data point that might prove them wrong.

Maybe the right concept to invoke today isn't the randomized controlled trial or A/B testing but rather confirmation bias, the tendency to misevaluate evidence by assigning greater weight to instances that support rather than refute your theory. The cure for confirmation bias is actually quite simple, at least in explanation - you have to look for evidence that proves you wrong. I think the challenge for most of us is that we default toward seeking proof of being right rather than ruling out the ways we could be wrong, and confirmation bias plays right into this tendency. My friend's marketing department never looked for instances where consumer behavior was the same regardless of whether those consumers were exposed to a campaign; my colleagues don't seem capable of tossing a coin to decide whether they should hit send on a follow-up email.
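To make the coin toss concrete, here's roughly what my colleagues could do - again purely hypothetical, with the reminder-sending and outcome-recording pieces left as stand-ins for however they actually track their tasks:

```python
import random
from collections import defaultdict

# Hypothetical tally of what happened with and without a reminder.
results = defaultdict(lambda: {"done": 0, "total": 0})

def decide_follow_up(rng=random):
    """Coin toss: send the reminder only half the time."""
    return rng.random() < 0.5

def record_outcome(reminder_sent, task_completed):
    """Once the deadline passes, log the result under the right bucket."""
    bucket = results[reminder_sent]
    bucket["total"] += 1
    bucket["done"] += int(task_completed)

def completion_rates():
    """Compare how often things worked out with and without a reminder."""
    return {
        ("with reminder" if sent else "no reminder"):
            (bucket["done"] / bucket["total"] if bucket["total"] else None)
        for sent, bucket in results.items()
    }
```

Half the time no reminder goes out, which is exactly what creates the missing data point - the case where everything worked out without a follow-up.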

Obviously, none of this is going to be solved by a quickly scribbled TOA post. Part of the issue is that I fear we miss opportunities to learn about confirmation bias, instead learning different lessons from situations where I would argue confirmation bias is the relevant one. Many readers may have learned that doctors once used leeches to cure sick patients - my understanding is that the medical theory of the time held that leeches could suck out the diseased blood. This anecdote was always taught to me with the air of "oh, look at how far we've come since those days", and it's indeed true that medicine has advanced many centuries past its dependence on leeches. But this style of teaching only improves my ability to win trivia contests while failing to explore the broader lessons that might apply in the present day. A more effective teaching style would point out that this is an example of how confirmation bias enabled the advancement of incompetence, then offer me a way to identify confirmation bias for myself so that I don't fall into the trap of repeating someone else's mistakes.

In the leech example, the problem was that the doctors, accepting the theory regarding leeches, never realized some patients would have healed regardless of the intervention, which allowed them to view each recovery as further evidence confirming the leech theory. My colleagues, always ready to send a follow-up, never know if the recipient would have completed the job regardless. My friend's firm poured endless resources into campaigns that may have had no true effect on consumer behavior. If we are to learn anything from these examples, it's that knowing whether one thing truly leads to another can be a far more complicated question than it might seem at first glance. So what's the right approach? Knowing for sure whether something worked is a very high bar, and perhaps it's not always realistic to aspire to such a lofty standard. But it's frighteningly easy to settle for looking strictly for evidence that our first instinct was right. Maybe the best way is to reframe the idea of what it means to be right - not accumulating evidence in our favor but instead ruling out the ways we could be wrong, methodically and consistently, until being right is the only remaining possibility.