The Dunning–Kruger Effect – Who knew?
I was introduced to the “Dunning-Kruger Effect” last night whilst listening to my first episode of a podcast called “This American Life”, on the way to pick up my son from soccer training.
This episode consisted of three separate stories supporting the notion of “The Defence of Ignorance” – the state in which we deliberately or unconsciously act as if we are not aware of some information about our context.
It’s kind of amusing that this observation was unknown to me, given that I see its effects so frequently. But now that I have a handle on it, I can explore it in more detail.
Just to recap, the Dunning-Kruger Effect is:
“a cognitive bias in which relatively unskilled persons suffer illusory superiority, mistakenly assessing their ability to be much higher than it really is.” – Wikipedia
The simplest scenario is the basic experiment that Dunning and Kruger performed on a range of students at Cornell University. Each participant was given a test on some generic subject, such as logical reasoning or grammar. As each test subject finished, they were also asked to rate their performance in percentile terms relative to all the other participants. E.g. an answer of ’65’ indicates that the subject thinks their score will beat 65% of all those who undertook the test. The results were quite distinct, and were repeated and verified over and over again by Dunning and Kruger and other researchers: the test subjects who scored in the very low percentiles (e.g. 11–16) gave themselves very high ratings, around the 65th percentile on average. In other words, the people getting E’s & F’s on their test honestly believed that they did well enough to get B’s and B+’s.
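To make the shape of that result concrete, here is a toy sketch of the gap between actual and self-estimated percentiles. The numbers are purely illustrative, chosen to mimic the pattern described above; they are not Dunning and Kruger’s actual data.

```python
# Hypothetical (actual_percentile, self_estimated_percentile) pairs per quartile.
# Illustrative figures only, echoing the published pattern: low scorers rate
# themselves far too high, while top scorers slightly underestimate themselves.
quartiles = {
    "bottom": (12, 62),   # scored ~12th percentile, guessed ~62nd
    "second": (32, 58),
    "third":  (55, 65),
    "top":    (87, 75),
}

for name, (actual, estimated) in quartiles.items():
    gap = estimated - actual  # positive gap = overconfidence
    print(f"{name:>6} quartile: actual {actual:>2}, "
          f"self-estimate {estimated:>2}, gap {gap:+d}")
```

The striking feature is that the self-estimates barely move across the quartiles, so the overconfidence gap is largest exactly where skill is lowest.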
The explanation for these results is rather simple: when assessing our performance we draw on the same memories, skills and knowledge that we used to sit the test. Because we don’t know that much, we don’t know enough to know how little we know, and so we grow confident.
There’s no reason to be dismissive of those suffering the effect. Other factors aside, each of us will at some point encounter a context in which we become subject to it. Someday we’ll enter a subject area about which we know next to nothing, and that very ignorance will allow us to grow confident.
It occurred to me that a number of the scenarios that supported the exposition of this “Defence of Ignorance” sounded much like incidents and experiences that I’ve had with many an agile team in the past few years. People who, not understanding much about software development, read some books about agile and begin confidently making arrangements and pontificating about the general order of things in a development project.
It’s not exclusive to agile by any means, and the occurrence of the problem doesn’t in any way reflect badly on agile practices. But it’s interesting that I felt the strongest resonance with examples of bad agile practice.
Why might this occur in agile practice? I suppose the main reason that this effect is so obvious in agile thinking is that agile all seems so simple on the surface. Furthermore, many agile resources are written in parable form, with long stories about folks who were having trouble with their software project but at some point drank from the well of agile knowledge and were able to snatch victory from the jaws of defeat. I often thought that agile protagonists wrote this way because of a lack of suitable case studies from real-world situations, but that was some years ago, and one would expect that by now this gap would have been filled. But perhaps fictional scenarios are also easier to control, and easier to access, for people trying out new ideas.
The problem of course is that software development is anything but simple. Our newly trained or still innocent apostles are ignorant of that fact and of the underlying difficulties, and so are able to offer a vision of simplicity and effectiveness that persuades not only those in a similar state of ignorance but also some folks with far more experience and knowledge, who follow along for reasons of their own.
Of course, I’m not saying this is universal, or even the majority. I’m saying that it occurs, and with enough frequency to contribute to projects becoming impaired. I suppose the mindset of at least some people is that key roles such as agile coaches, scrum-masters and the like need skills other than those of the technical or traditional project manager. They need to be excellent with people, knowledgeable about the agile process, and analytically strong. But these qualities are necessary, not sufficient to guarantee a good result. What ends up happening is that Dunning-Kruger sufferers can begin arguing, influencing or directing experienced software developers to do things that they otherwise might not do, because “the process tells them so”. I’ve literally seen non-software developers reading out of a Ken Schwaber or Jeff Sutherland book, telling the software engineers what to do. It doesn’t work.
What can we do about it? One obvious answer is to ensure that we hire or assign to these key roles only people who have sufficient hands-on experience in software development projects, in addition to solid knowledge of both the general agile landscape and the particular sub-class of methodology they wish to practice.