[Sunday, May 9, 2010]

in a prior entry titled "Operations Research is dead", i had proposed to write a series of entries each touching on a particular part of a pair of papers written by ackoff in the late 70s.


the early entries will focus on his critique of OR as it appears in the paper titled "The Future of Operational Research is Past".

i will begin with one of what his students called "ackoff's fables". it recounts his review of the work done by a team of highly-educated practitioners of OR. i have reproduced below a slightly edited version of what appears in a section of that paper, interspersed with my comments:
"... they showed me what they referred to as their "evaluation of the results." Over a reasonably long period of time they had recorded the decisions actually made and implemented by the responsible managers and had fed these decisions into their model and calculated the total related operating costs. They had then compared these costs with those associated with the optimal solution derived from the same model. ... The optimal solutions were consistently better than those of the managers. Using these differences they had estimated an annual saving which they had used successfully in convincing management to adopt their model and optimizing procedure."
so far, this sounds like a textbook approach to justifying the cost incurred in implementing an OR model.
"... The optimal solution of a model is not an optimal solution of a problem unless the model is a perfect representation of the problem, which it never is. Therefore, in testing a model and evaluating solutions derived from it, the model itself should never be used to determine the relevant comparative performance measures. The only thing demonstrated by so doing is that the minimum or maximum of a function is lower or higher than a non-minimum or non-maximum."
ackoff starts by making a point that many practitioners tend to lose sight of: to justify their work, the research team ought to have checked whether their solution was indeed the best in the real world, not just in the fictitious world of the model. quite often, because of the shortcomings of the model, the solution turns out to be infeasible in practice; and because of approximations in the measure of goodness used, the solution, even if feasible, is optimal only with respect to a "wrong" objective function.
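to make this concrete, here is a minimal numeric sketch -- all numbers hypothetical, not from ackoff's paper -- of why comparing decisions inside the model proves nothing: a solution that is optimal inside a simplified model can lose to the manager's adjusted decision once both are evaluated against real-world outcomes.

```python
# hypothetical sketch: the model ignores an overtime penalty on output above
# 80 units, so its "optimal" decision only looks best inside the model itself.

def model_profit(q):
    # the simplified objective the team optimized: a flat margin per unit
    return 8.0 * q

def real_profit(q):
    # reality adds a penalty the model omits (overtime beyond 80 units)
    return 8.0 * q - 9.0 * max(0, q - 80)

q_model = 120    # the model's optimum (say, at the capacity limit)
q_manager = 80   # the manager's modified decision

# judged inside the model, the model's solution wins...
print(model_profit(q_model), model_profit(q_manager))   # 960.0 640.0
# ...but judged against reality, the manager's decision wins
print(real_profit(q_model), real_profit(q_manager))     # 600.0 640.0
```

the comparison the team actually ran corresponds to the first pair of numbers; ackoff's objection is that only the second pair means anything.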

nevertheless -- and this is where i find ackoff's judgment a trifle harsh -- the optimal solution to an imperfect model is often pretty close to a good solution in the real world. bridging this gap is a tricky problem that the system itself rarely addresses well; users who see the potential of a good starting point can often work wonders with it. one must presume that the team under ackoff's microscope had missed this subtle but all too important point. this is apparent from what he says next:
"All models are simplifications of reality. ... Therefore, it is critical to determine how well models represent reality. In this project the team had not done so."
what makes this matter even more curious is what he says next:
"... The researchers went on to tell me how the managers had invariably modified the optimal solution to take into account factors that were not taken into account in the model. Furthermore, the team confessed that it had not carried out any analysis of the nature of the adjustments made by managers or their effects. When I pressed for an explanation of this oversight, I was told that the nature of the factors considered by the managers precluded their inclusion in a mathematical model. Voilà again!"
one can't help asking: weren't these researchers even mildly curious?

this is another recurring problem with the application of optimization in the real world. inebriated by the power of their models and algorithms, OR professionals dismiss the accumulated knowledge of their intended users and, instead, go into head-to-head competition with them. they fail to recognize that their models are imperfect. even imperfect models can be useful -- but not if their builders disregard what the users know over and above what the models capture.
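a hedged sketch of the alternative, with hypothetical numbers: instead of competing with the managers, treat the imperfect model's answer as a warm start and let the user's knowledge of an unmodelled factor refine it.

```python
# hypothetical sketch: the model's optimum as a warm start, refined by a user
# who knows about a penalty the model omits (overtime beyond 80 units).

def real_profit(q):
    # the real outcome, including the unmodelled overtime penalty
    return 8.0 * q - 9.0 * max(0, q - 80)

q = 120  # the imperfect model's suggested decision
# the user nudges the decision downward as long as that actually improves
# the real outcome -- a one-unit local search starting from the model's answer
while real_profit(q - 1) > real_profit(q):
    q -= 1
print(q)  # 80 -- a good real-world decision, anchored by the model's suggestion
```

the point of the sketch is the division of labour: the model supplies a strong starting point, and the user's knowledge closes the gap the model cannot see.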

after this, the story plays out as you would expect it to:
"Then came the climactic revelation. After about six months the managers had discontinued use of the model because of a significant change in the environment of the system. This change was political in character. When I asked why they had not tried to incorporate the relevant politics into their model, I was told that such changes are neither quantifiable nor predictable.

It should be apparent by now that if the researchers had, in fact, solved a problem, it was not the problem that the managers had."
i believe that ackoff's fable is extremely valuable, but not as a nail in the coffin of OR. it is more a cautionary tale for all practitioners, one that ought to teach us to be humble in the face of knowledge that our models do not, and quite often cannot, comprehend.

in subsequent entries, i will touch upon some of the more serious attacks on OR that ackoff launches.