Throughout this blog, I have sought to challenge conventional management 'wisdom' and the assumptions of rationality, predictability and control on which it is based. Over the past few years, this taken-for-granted view of the nature of the organizational world has led to increasing calls for management practice and consulting interventions to be "evidence-based".
From that perspective, the validity of a proposed new strategy, structure, system, process or whatever is assumed to arise from objective evidence of its successful use in other settings or from generalized principles arising from research. Often, such supposed evidence is codified in the form of, for example:
- case-based 'success stories'
- 'best practice' guidelines; and/or
- diagnostic predictors of successful outcomes.
But, as Phil Rosenzweig sets out in his book, The Halo Effect: How Managers Let Themselves Be Deceived, there are many common flaws in the arguments put forward to support these claims. He also exposes the pseudo-scientific nature of much of the 'research' on which they are based.
However, we are not talking here about products and practices that can be tested meticulously in advance, and replicated precisely in design, development and application. We are talking about the complex social processes that we call organization. And, whilst the dynamics of organization are the same in each case (the self-organized patterning of local, conversational interactions), the ways in which these play out in each situation are unique - and unpredictable in all but the most limited sense.
This means that, from an informal coalitions perspective, it is not possible to link specific interventions to organizational outcomes - either before or after the event. Nor is it possible to carry out 'experiments' in limited settings and expect the repeatability and/or scalability of these to be unproblematic. The complex social dynamics of organizational life make the relationships between cause and effect untraceable. And these also place a premium on the unique contextual factors (i.e. interactional dynamics) that are 'in play' at any time.
Deciding the validity and efficacy or otherwise of a particular action (whether a formal development initiative or an aspect of everyday practice) is therefore a subjective and interpretive task. That is, it rests on such questions as:
- What is it that we think we are doing? And why do we think that we are doing it?
- Does what we and others are doing seem to make sense - and does it 'feel right' - at this time, in this place, and in these circumstances?
- What evidence of 'success' and 'failure' are we seeing in our actual practice, as the patterns of our actions emerge over time?
- How does what we and others are doing in practice 'stack up against' what we thought we were setting out to do?
- What novel and/or repetitive themes are evident in our ongoing interactions, as we move forward together - opening up new possibilities and/or constraining movement?
- Does what we are doing appear to be useful to us at this time and in this situation?
- And what do we think that all of this means in terms of what we might continue to think and do going forward?
The success (or otherwise) of such interventions will only become evident as 'outcomes' emerge and come to be recognized as such. And, even then, what constitutes 'success' or 'failure' will be a matter of interpretation and social construction after the event.
In essence, the 'evidence' of the worthwhileness or otherwise of any changed way of working only emerges in the midst of its practice.
In a future post, I will set out what I see as some of the characteristics of so-called "evidence-based practice" in organizations. And I'll seek to contrast these with the attributes of in-the-moment (essentially improvised and conversational) interventions that arise from what I'm calling here "practice-based evidence".