Friday, October 31, 2014

David Welch meets Goliath Gladwell

Another reader wrote to me, having just finished The Value Crisis:
I have just also read 'David & Goliath' by Malcolm Gladwell. You would enjoy chapter two and its description of the Inverted U. I'm not sure how this might transfer to the world of money - although the book does give one or two limited examples. See what you think.
While I was familiar with Gladwell's interesting take on David and Goliath from a TED talk, I had not yet read his book, so I went to the library to read up.  Sure enough, some interesting ideas emerged.  (I confess that this post might hint at some hefty mathematical concepts, but I'll try to keep it simple.)

To briefly summarize Gladwell's presentation of an Inverted-U Curve: he describes how one might graph the difficulty of parenting against the wealth of the parent (see the graph below).  While it is easy to see that parenting is hard for a poor parent and gets easier as wealth increases, the argument is that parenting is also hard for very rich parents.  (This Deseret News article explains.)  So the graph goes up (towards easier parenting) as wealth increases, but then it reaches a peak and curves back down - thus the label "Inverted-U Curve".


This curve is presented to counter our intuitive assumptions about parenting getting easier as wealth increases - it is not the straight line graph that we might expect.  He suggests that the same curve would apply if we are graphing academic achievement against class size.  In other words, more of a good thing is not always a good thing.  This is something I talk about a lot in The Value Crisis.

(I don't know where the $75,000 figure came from for the graph's peak - Gladwell is notorious for letting the science take a backseat to a good story.  I note that other research talks about general happiness leveling off when annual income reaches about $75,000.  Perhaps this is an interesting correlation or perhaps Gladwell simply borrowed the same figure for his graph.)
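(For anyone who wants to play with the shape of such a curve, here is a quick sketch in Python.  The downward-opening quadratic and the $75,000 peak are purely my own stand-ins for illustration - this is not Gladwell's data or formula.)

import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: a downward-opening quadratic standing in for an Inverted-U Curve.
# The 75,000 "peak" is just the figure discussed above, not real data.
wealth = np.linspace(0, 150_000, 300)        # horizontal axis: parental wealth ($)
peak = 75_000                                # assumed location of the curve's peak
ease = 1 - ((wealth - peak) / peak) ** 2     # vertical axis: "ease of parenting" (arbitrary units)

plt.plot(wealth, ease)
plt.xlabel("Parental wealth ($)")
plt.ylabel("Ease of parenting (arbitrary units)")
plt.title("An Inverted-U Curve (illustration only)")
plt.show()

The exact shape doesn't matter; the point is simply that the curve rises, peaks, and then falls again.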

The interesting thing for me about Inverted-U Curves is that they suggest there is often an optimal value for things.  In most situations that optimum is obvious, but we get tripped up when the horizontal axis is measuring a number-based value system.  In the monetary number-based value system, more wealth is always worth more.  So we assume that the same valuation applies to qualitative human circumstances such as the ease of parenting.  Not so.

The optimal peak value might also represent the concept of "enough".  However, I caution against the idea that such values can be empirically determined and then used as human or policy goals.  The reality for most things in life is that they manifest themselves as polarities - unsolvable problems with no single optimum solution.  An example of a polarity in the business world is teamwork versus individual effort.  Neither is the optimum approach to work.  Sometimes we do better with one, sometimes with its opposite.

Here's another Inverted-U Curve, showing the business relationship between incentives for innovation and levels of competition:


With no competition, there is no incentive to innovate.  When competition is high, the cost of innovation can be detrimental.  So a business will be most successful when the level of competition corresponds to the innovation sweet spot, right?  Not quite.  Sometimes, innovation itself is bad for business, and maintaining tradition is the preferred strategy.  (Innovation and Tradition are two opposite poles of a polarity.)  Using the same forces at work in the innovation graph above, we can argue that the incentive for maintaining tradition will be high when competition is low and also when competition is fierce.  There will also be a spot directly below the innovation-incentive peak where it is most difficult to maintain tradition.
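(If you want to see the two incentives together, here is another rough sketch.  Again, the curves are just illustrative quadratics on a 0-to-1 competition scale that I made up to show the mirror-image relationship - they are not measurements of any real market.)

import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: mirror-image incentive curves over a made-up 0-to-1 competition scale.
competition = np.linspace(0, 1, 200)                    # 0 = monopoly, 1 = fierce competition
innovation_incentive = 1 - (2 * competition - 1) ** 2   # peaks at moderate competition
tradition_incentive = 1 - innovation_incentive          # highest at either extreme

plt.plot(competition, innovation_incentive, label="Incentive to innovate")
plt.plot(competition, tradition_incentive, label="Incentive to maintain tradition")
plt.xlabel("Level of competition")
plt.ylabel("Incentive (arbitrary units)")
plt.legend()
plt.show()

The trough of one curve sits directly below the peak of the other, which is exactly the point made above.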

Overall, there is no sweet spot for level of competition.  Sometimes, the level of competition will increase the incentive for innovation, sometimes it will increase the incentive for tradition, and both can be good.

Malcolm Gladwell argues that more of a good thing is not always a good thing, and that sometimes, if we apply the right value system, we will discover optimal states - the concept of "enough".  The Value Crisis argues the same thing, but also suggests that we should be aware of what situations in life are polarities, with no optimal solutions.

The bottom line is that number-based value systems - where more is always worth more (by definition) - are incompatible with the qualitative natural values that are necessary to our happiness and continued existence on this planet.  Decisions that support the maximization of number-based values (such as wealth or sales or economic growth or volume of resources extracted) will eventually go way past the optimal point.

So here's the real question:  Why do we so blindly run our lives and our world in constant pursuit of maximization on the horizontal axis instead of on the vertical one?!

Thursday, October 9, 2014

SMART Goals and Anecdotal Evidence

A reader who works as a teacher wrote to me the other day:
I thought of your book once again this week, in an all day meeting of our school leadership team.  We spent the morning confirming that there is much more to educating young people than just generating marks, and that just focusing on numeric measures was the wrong way to go.  And then we spent the afternoon talking about nothing but numbers.  Stupid numbers.  Like the TDSB Director's vision to decrease the number of students needing Special Ed support by 50%.  (Maybe they just need extra vitamins.)  And stupid comparisons of unrelated data, like the achievement of one group of 14-year-olds vs the achievement of an entirely different group of 14-year-olds a year later.  Actually understanding statistics has become a curse!
The Toronto District School Board (TDSB) comment got me curious, so I looked on the web and found their Years of Action: 2013-2017 report.  This document promises 36 separate actions, resulting in 102 different outcomes - most of which have numeric measurements attached.  Wow.  102 targets that the TDSB can be directly scored on.
Such targets are often referred to as SMART goals - goals that are (S)pecific, (M)easurable, (A)chievable, (R)elevant, and (T)imelined.  Since The Value Crisis talks about how that which cannot be measured is dismissed in this age of scientific thinking, I started to wonder whether unmeasurable goals therefore 'lack validity' in modern society.

I'm guessing there are three reasons for making a goal measurable.  The first is so that you can track progress and see if you are doing the right things.  The second is so that you will know when you have achieved your goal and can celebrate accordingly.  The third (more cynical reason) is for promises made in the public sphere, so that you can prove to others that, despite appearances, your target has been reached, according to objective measure.  (This might be where classic manipulations of statistics come into play.)

But what if your goal can't be measured so easily?  What if your aim is to be a better parent or to be happier?  The experts would say that you need some way of measuring this.  I say that's nonsense.  If you didn't need a measurement to identify the need for the goal in the first place, you don't need a measurement to know if you are experiencing success.

(I tried to come up with a new M-word for qualitative SMART goals.  So far, the best that I've come up with is "Motivational":  If you are motivated to continue what you're doing, you are likely moving towards the goal that you desired in the first place.  Please make better suggestions!)

So if you're not measuring your progress by some quantifying means, what can you use for feedback on your goal?  Well, you might go with anecdotal evidence.  <Insert dramatic organ chord.>  Anecdotal evidence!  The mere use of the term immediately raises the hackles on statisticians' necks.  Well, I'm here as the Devil's Advocate to suggest that anecdotal evidence still has plenty of value when drawing general conclusions.  On the other hand, the scientific method, when applied inappropriately, can actually lower the value of information.

I'll illustrate this with an old story (an anecdote!) of three professors traveling on a train through a foreign land.  The historian, looking out the window and seeing a black sheep grazing alone on the hillside, says: "How interesting!  The sheep of this land are black."  The statistician looks out and says: "Well, to be more correct, what we know is that in this part of the country there is one sheep that is black."  The philosopher, without looking up from his book, immediately corrects him: "Actually all we can say for certain is that in this specific area there is a sheep which is black on one side."

Had I been in the next seat, I might have added my own quip: "Excuse me, gentlemen, but I believe the only conclusion you can draw is that, looking out the window of this train, an object was observed that looked like what you know of as a sheep, whose wool (at least what you could see) appeared to be very dark in colour."  The historian's statement could clearly be misleading.  The statistician tried to make a more accurate (and thus, presumably, more valuable) statement.  However, each further refinement reduced what began as a simple observation to a meaningless statement of no practical value to anyone.  So where does anecdotal evidence cross the line from being a scientific menace to being meaningful and useful information?

As a scientist, I am fully aware of the abuse heaped upon anecdotal evidence - and rightly so, when it is used to 'prove' things that can only be proved by more rigorous analysis.  On the flip side, one might argue that almost everything we know, as individuals, comes from anecdotal evidence.  We know that last year's vacation was enjoyable or that last night's movie was boring because that was how we experienced them.  We know that flying insects with yellow-and-black-striped bodies can inflict pain, either through personal history or from the anecdotes of others.  That might not be 100% true every time, but it is certainly very useful information.  Indeed, at one time, the entire store of mankind's knowledge was anecdotal and communicated through story-telling.  You may say that it was naive and imperfect, but if I were stranded in the wilderness, I would take the knowledge of our Stone Age ancestors over that of your average physicist any day of the week.

The very things that make anecdotal evidence suspect - such as our innate propensity to focus on exceptional events (confirmation bias) - can have their own value.  For example, one adult telling a child that they are clever enough to do well in school can boost that child's self-esteem and willingness to work hard, and ultimately improve their academic performance, even if many other adults are providing only discouragement.  (I heard about that happening once, so it must be true. ;-)  Even medical researchers have to concede the validity of the placebo effect, in which measurable outcomes can be altered by our own belief in anecdotal evidence.

So where am I going with all this?

Our brains are wired to make use of anecdotal evidence, and while we sometimes get it wrong, I'll wager that it is (more often than not) useful and valuable information.  A single anecdote, such as being out of breath after climbing some stairs, can suggest to us that perhaps we are out of shape and need to exercise more.  If a few more pieces of breathless anecdotal evidence confirm this, we might set a goal to improve our physical fitness.  If we start being more active because of this and find that we can now take the same set of stairs two steps at a time with ease, then there is no need for science to measure whether or not we have improved our muscle tone or cardiac output.  It is sufficient, and rewarding enough, for us to feel that we have succeeded in what we set out to do and to be happier for the effort.

Similarly, in life, if it is our goal to be happier (and who wouldn't want that?), then we don't need measurable 'SMART' goals to tell us if we are doing the right things.  As for my teacher friend, while you might want quantifiable studies to alter policies for the entire board, that does not change the fact that in a specific classroom, for a specific child, anecdotal evidence is the best proof available that what you are doing is working, or not.  So why does number-based policy so often trump anecdotal evidence for individual circumstances?

P.S.  If classroom marks were the truest indicator of value and potential, I wouldn't be here.