"The idea of a plane outside the world on which to stand has become a fundamental myth of our culture. The myth has most often taken the form of a spectator view of knowledge – the notion that we can stand aside from the action and comment upon it from a detached viewpoint."
Bruce Gregory - Inventing Reality
Yesterday afternoon, I attended an excellent meeting in London, at which Roffey Park’s Liz Finney and Carol Jefkins shared the output of their research into the evaluation of OD interventions (copies can be purchased here). Their work provides a thorough review of current thinking and practice in this aspect of organisation development. The research is presented in an easily digestible form. And the authors offer practitioners a "toolkit" of potential interventions to evaluate the conduct and impact of their work.
But is their prescription credible in the complex world of organizational dynamics?
As a result of their research, Finney and Jefkins declare themselves to be committed advocates of evaluation as a necessary part of OD practice. They call on practitioners to leave behind what they describe as the "unproductive philosophical debate" between objectivism (quantitative, verifiable, measurable …) and subjectivism (interpretive, constructed, emergent …). Instead, they call for a "pragmatic ‘third way’ combining the strengths of subjectivist and objectivist positions, using both qualitative and quantitative approaches to produce solutions that are greater than the sum of their parts". This, they argue, provides a practical way forward for external consultants and in-house specialists alike.
Although acknowledging in the report that "… there are many conceptual, practical and political obstacles to evaluating OD," they nevertheless go on to say, "The conversations we had with the expert practitioners left us with a sense of cautious optimism and a desire to be constructive and pragmatic."
This is a worthy aim. It would be shared by most of the OD practitioners who were present at yesterday’s meeting. However, buried within it is the implication that those who fail to accept the ensuing line of argument are being unconstructive (or, worse still, destructive) and/or dogmatic in their stance. On the contrary, I would argue that surfacing flaws in an argument or its underlying assumptions is eminently constructive. Indeed, it’s a core aspect of OD consultancy! Also, if pragmatism is not to degenerate into expediency, it is essential that decisions and actions are grounded in firm principles rather than popular myth.
So why do I feel that the ‘just do it’ approach advocated by Finney and Jefkins is mistaken?
The implications of complexity
Following their largely fruitless search to track down prior research and writings on OD evaluation, the authors decided to interview a range of so-called "experts" in the OD field. I say "so-called" solely because I was one of those interviewed in the spring of 2009. Questions were designed to elicit our perceptions of the extent, nature and impact of evaluation in current OD practice. My responses inevitably reflected an informal coalitions perspective of organizational dynamics, which sees organizations as complex social processes of people interacting together – or as dynamic networks of self-organizing conversations, as I describe them in the book.
As with all interviewees, a number of my observations found their way into the final report. Some of these are implicit in statements that the authors make in the main text, based on their sifting and aggregation of comments from a range of respondents. Others are included verbatim. The latter extracts are used to illustrate major themes that emerged from the conversations as a whole; or, as in my case, responses that ran counter to the majority of viewpoints. Having re-read the report, I am comfortable that the three verbatim comments of mine, extracted from the interview transcript, still provide a fair reflection of my stance on the subject. I’ve set these out below to underscore the point that I am making.
The first extract sets out the fundamental premise on which I would challenge the ability of anyone to link particular outcomes to specific actions – whether these relate to OD interventions or to any other activity in which outcomes are contingent on the thoughts, feelings and actions of people:
"I look at organisations ... from the point of view of organisational dynamics ... So, for me, organisations work by people interacting continuously with each other across – and beyond – the organisation. Through formal and informal conversations, they make sense of what’s going on and decide how they’re going to act. And it’s through the interplay of all those interactions that outcomes emerge. From this perspective, it’s impossible to say if and how a particular formal intervention created a particular outcome, which is the essence of evaluation ... you can’t make that link. Cause and effect are not related in a simple, linear fashion. It’s not a clear link ... and correlation doesn’t mean causation. You can’t tie specific actions to the outcomes that you’re going to get, because there are too many things in play. You can’t put the organisation in a test tube and do experiments on it. Outcomes are emerging through this constant interplay of people in day-to-day interaction."
This inability to predict, control and evaluate outcomes in organizations is a natural – and unavoidable – dynamic of people in interaction. It is not something that only happens now and again, or only in some (assumed to be poorly managed) organizations and not in others. It happens all of the time, in all organizations. And yet conventional management wisdom assumes the opposite. Pointing to this disconnect between mainstream thinking and the complex dynamics of everyday organizational life is not a counsel of despair. On the contrary, it offers a clear pointer to what I see as one of the central tasks of organizational consultancy, whether provided in-house or externally. That is (as suggested by the second ‘quotation’):
"To help [clients] understand these dynamics better and help them to navigate in these messier waters ... not colluding with the view that it’s all measurable ... and it’s all controllable ... and it’s all predictable."
The last of the three extracts relates to the increasing ease with which technology can facilitate the collection and analysis of data. Far from improving the situation as regards evaluation, I argued that:
"Increased technology has enabled a flawed concept to look more credible still because it’s easier to create numbers."
Understanding how organizations work is fundamental
So, despite the thoroughness and professionalism of the Roffey team’s research project, and their intention to make their report accessible and useful to practitioners, I don’t accept the proposition that the philosophical debate is sterile. Nor do I agree with the conclusion that is assumed to flow naturally from such a position – that is, that practitioners should just get on with the measurement task, using whatever tools they can to achieve this. I see this as another example of the ‘do it better and get it right’ response to past failure that is characteristic of what Ralph Stacey calls "the dominant management discourse".
The debate about how organizations work and what the implications of this are for management and OD practice is not a sideshow or unnecessary distraction. It is fundamental. In a world in which managers have been led to expect certainty, predictability and control, it might be uncomfortable to accept the challenges presented by the social complexity (i.e. inherent messiness) of everyday organizational life. But wishing that things were otherwise won’t make these dynamics go away!
Evaluation as an ongoing dynamic of 'open play' rather than a periodic, 'set piece' event
From this perspective, the ability to "… stand aside from the action and comment upon it from a detached viewpoint" – as implied by formal, ‘set-piece’ evaluation – makes no sense at all. Instead, there is a recognition that evaluation is central to the ongoing sense making and action taking of everyday organizational life. So it’s here, in the 'open play' of day-to-day interaction, that the perceived value of OD interventions (and other formal ‘designs’) is continuously assayed and their intended benefits realized – or not.