Breaking news

– In which Dorn turns breaking things into a virtue.

One of my duties at my last job was to break things. When a new computer program, website, or procedure was created by the office, I would dive into it and test it to the breaking point if I could. I followed the instructions as laid out, but tried to think of ways they could be misinterpreted or misapplied, resulting in the failure of the whole system. The theory was, of course, that it was better to find and fix these weak points while the product was in house than after it was made public.

I liked my job of breaking things, partly because it was like puzzle-solving, and partly because I had a knack for it. I understood the feelings of Nick Naylor, the “hero” of the 2005 dark comedy Thank You For Smoking. He’s a lobbyist for Big Tobacco, and when asked by a lung cancer victim how he can work at such a job and even enjoy it, he says the job gives him satisfaction because he is so good at it. (This is a very funny movie, and I recommend it if you missed it the first time around, unless you are offended by slurs cast on the health virtues of Vermont cheddar cheese. It’ll be on HBO over the next week or two.)

Of course I’m not suggesting that there’s any moral equivalence between trying to break things that people have worked to make unbreakable and trying to get people addicted to a nasty, life-shortening habit for money. The latter is odious, while the former is perfectly respectable. As Phil Johnson points out in his essay “Failure is just data”, testing a new product is like testing a scientific hypothesis. If the product fails to perform as expected, it just means that the hypothesis (that the product is ready for rollout) is disproved.

“If this were to happen to a scientist,” Johnson explains, “the reaction would be that they are doing their job well, as long as they capture the data about why the hypothesis was wrong. It’s not out of the ordinary, it’s expected and necessary. The key is framing a failure as an informative versus negative outcome.” (Johnson may have a few blind spots himself about the mental and emotional makeup of scientists.)

His point is that the emotional bias against experiencing failure is frequently misplaced: failure often should be welcomed as a stepping stone toward ultimate success. This emotional bias is a form of confirmation bias (my favorite bias), which causes us to seek out information that confirms what we already believe, or what we want to believe.

This bias is present even when we don’t have pride of ownership in a product we are testing, or any objective stake at all in the success or failure of the process. In his book The Righteous Mind, Jonathan Haidt describes an experiment performed by the originator of the term “confirmation bias”, in which subjects try to work out the rule behind a simple sequence of numbers:

In 1960, Peter Wason published his report on the “2–4–6 problem.” He showed people a series of three numbers and told them that the triplet conforms to a rule. They had to guess the rule by generating other triplets and then asking the experimenter whether the new triplet conformed to the rule. Suppose a subject first sees 2–4–6. The subject then generates a triplet in response: “4–6–8?” “Yes,” says the experimenter. “How about 120–122–124?” “Yes.”

It seemed obvious to most people that the rule was consecutive even numbers. But the experimenter told them this was wrong, so they tested out other rules: “3–5–7?” “Yes.” “What about 35–37–39?” “Yes.” “OK, so the rule must be any series of numbers that rises by two?” “No.” People had little trouble generating new hypotheses about the rule, sometimes quite complex ones. But what they hardly ever did was to test their hypotheses by offering triplets that did not conform to their hypothesis. For example, proposing 2–4–5 (yes) and 2–4–3 (no) would have helped people zero in on the actual rule: any series of ascending numbers.

Wason called this phenomenon the confirmation bias, the tendency to seek out and interpret new evidence in ways that confirm what you already think. People are quite good at challenging statements made by other people, but if it’s your belief, then it’s your possession—your child, almost—and you want to protect it, not challenge it and risk losing it.
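
Just to see the asymmetry for myself, here’s a minimal sketch of the 2–4–6 game in Python (a toy illustration of my own, not anything from Wason’s study or Haidt’s book; the function names are made up). Both the actual rule and the hypothesis most subjects land on answer “yes” to every confirming triplet, so those tests carry no information; only a triplet that the hypothesis rejects can tell the two apart.

```python
# Toy illustration of the Wason 2-4-6 task (the rules are as described in the
# quote above; the names and test lists are made up for illustration).

def actual_rule(triplet):
    """Wason's real rule: any series of ascending numbers."""
    a, b, c = triplet
    return a < b < c

def subjects_hypothesis(triplet):
    """The rule most subjects settle on: a series that rises by two."""
    a, b, c = triplet
    return b - a == 2 and c - b == 2

# Confirming tests: both rules answer "yes", so they can't be told apart.
for t in [(4, 6, 8), (120, 122, 124), (3, 5, 7), (35, 37, 39)]:
    print(t, actual_rule(t), subjects_hypothesis(t))   # True, True every time

# Disconfirming tests: (2, 4, 5) gets a "yes" from the experimenter but a "no"
# from the hypothesis, which is exactly the evidence subjects rarely sought.
for t in [(2, 4, 5), (2, 4, 3)]:
    print(t, actual_rule(t), subjects_hypothesis(t))   # (True, False), (False, False)
```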

My knack for breaking products and processes at work might have been related to my ability to suspend any pride of ownership (which of course is easier if it’s not my own product that I’m testing).

Now that I’m retired, though, I find I don’t have as many chances to practice this skill—there just aren’t that many things around that need me to test them to the breaking point, unless I want to failure-test my own work (which I’m not confident I could do objectively), or to tell Kathleen all the ways that the thing she is doing isn’t working. And that way madness lies!

Happy Halloween!

Thanks,
Dorn
10/31/2019

Optimist prime

– In which Dorn loses THE argument with Kathleen.

One of the things I like to feel smug about is my enlightened skeptical view toward my own beliefs. I have even started accumulating notes for a blog post on healthy self-doubt. I’ve already got a cool quote to use by Oliver Wendell Holmes, “Certitude is not the test of certainty. We have been cocksure of many things that were not true” (from a Washington Post review of Oliver Wendell Holmes: A Life in War, Law, and Ideas by Stephen Budiansky). I’m debating whether to include in that blog post the concept of confirmation bias, where we reinforce our own beliefs by hungrily ingesting supporting evidence, but ignoring, to the extent we can, any contrary evidence.

I’m also an optimist. Some might say smugly so, certainly intentionally so. I work hard at it. Many’s the time, when Kathleen and I have debated philosophical points, that she’s said, “You’re such an optimist!” “No, you’re just a pessimist!” I might reply. “I’m not a pessimist, I’m a realist.” “No, I’m the realist.” “Are not.” “Am too.” “Am not.” “Are too.” And so forth.

My smugnitude was tested recently. I was poking a stick into the internets to see what I might pry out, and I found a scholarly review article, “Costs and benefits of realism and optimism” (Curr Opin Psychiatry. 2015 Mar; 28(2): 194–198.) In it, I found that “unrealistic optimism”, also referred to as “optimism bias”, is “a robust phenomenon across a variety of tasks and domains” that is accepted widely enough to be the topic of multiple papers in psychiatry and philosophy. Uh-oh, I don’t like where this is going!

Apparently, the question of whether unrealistic optimism exists has long been settled (“yes”), and now thinkers are pondering whether it actually does any good. There is a theory, which the paper didn’t really embrace, that unrealistic optimism, while making one’s view of the world and one’s place in it less accurate, nevertheless confers some sort of benefit on the optimist.

The notion of “benefit” was picked apart. Does unrealistic optimism make you feel better, psychologically or maybe even biologically? Or does it make your situation (in society, for example) objectively better? Philosopher types talked of “epistemic” benefits, which, as near as I can understand, means getting closer to evidence-based truth.

It is well known in clinical circles that people experiencing depression tend to have a more realistic understanding of some situations, such as their own present and probable future well-being, than people without the condition, optimists included. Most people, and especially optimists, apparently underestimate with alarming predictability the chances that something bad will ever happen to them.

This put me in a real spot. Do I stand up against confirmation bias, and accept that I have optimism bias? Or do I give in to it, and continue to tell myself that my optimism is real realism, and just ignore any evidence to the contrary?

Well, I did what any thinker would do in such a situation: I scoured (well, I browsed) the internet for more evidence that supported the conclusion that I wanted to believe. Okay, so “unrealistic optimism” is a thing. It will take me a while to un-learn that, but maybe it’s counterbalanced by “unrealistic pessimism”? If I can’t win my philosophical debate with Kathleen, maybe I can at least tie?

Turns out there’s a lot less discussion of unrealistic pessimism out there. It exists, apparently, but only in extreme situations. I found the abstract to an article, “Unrealistic Pessimism”, from the Journal of Social Psychology, Jul 1, 2010: 511-516. Here’s the abstract in its damning entirety:

Various data suggest that individuals tend to be unrealistically optimistic about the future. People believe that negative events are less likely to happen to them than to others. The present study examined if the optimistic bias could be demonstrated if a threat is not (as it has been researched up to the present) potential, incidental, and familiar, but real, common, and unfamiliar. The present research was conducted after the explosion at the atomic power station in Chernobyl, and it was concerned with the perception of threat to one’s own and to others’ health due to consequences of radiation. The female subjects believed that their own chances of experiencing such health problems were greater than the chances of others. Thus, in these specific conditions, unrealistic optimism was not only reduced but the reverse effect was obtained: unrealistic pessimism.

So it would take a Chernobyl-scale event for me to even score a draw in the philosophical debate with Kathleen I mentioned earlier. I’m sunk. The only thing I can think of to do is to drop my smug sense of superiority about my mastery over confirmation bias, ignore the facts, and try to retain some shred of my optimism bias so I don’t get trounced too badly by Kathleen. And I’ll either fail, and be able once again to feel smug about my optimism, or I’ll succeed, and be able to feel smug about conquering my former smugness about my optimism. It’s a win-win! (It’s working already!)

Here’s a funny comic about confirmation bias from the online strip Wondermark.

Thanks,
Dorn
9/22/2019

PS. On reviewing this post, Kathleen points out that her arguments wouldn’t seem so pessimistic if she didn’t have to spend so much time injecting reality into my optimism. How can I answer that, now that Science has confirmed it?

Here’s a joke from Kathleen:

[Cartoon: Socrates, about to drink the hemlock, asking, “Is this glass half empty or half full?”]