I recently attended a brief presentation of some extensive academic research into leadership performance. Leadership here referred to the contribution of the organization's ‘top man’ or ‘top woman’. So far as I could judge, the research had been carried out impeccably in terms of established academic rigour. But, in abstracting from the complex reality of organizational life on which the research was supposedly focused, the subsequent conclusions were, to my mind, seriously flawed.
The research sought to draw causal links between the backgrounds of those in the most senior leadership positions and the comparative success of their organizations, as defined in the research. The central proposition was that those whose backgrounds satisfied the particular criterion identified in the research would make the best leaders. Simple.
But none of this took account of the complex social dynamics of organization through which "success" - and indeed "leadership" - emerges in practice.
The basis of the researcher's conclusion was their discovery of a "statistically significant" correlation between one aspect of the leader's background and the chosen measure(s) of business success. That is, based on the data analysis, success occurred more frequently when one 'type' of leader rather than another was in post. However, the presence of one of these factors (either specified leader type or successful performance) was not always accompanied by the other. Success also occurred when other types of leader were in post, although less frequently. And it is impossible to say from the research that the former caused the latter, even when both factors were present.
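The gap between correlation and causation here can be sketched in a few lines of simulation. This is a hypothetical illustration, not part of the research being discussed: a hidden third factor (labelled 'organizational resources' purely for the sake of the example) drives both the hiring of one 'type' of leader and the measured 'success', producing exactly the kind of correlation the researchers found, with no causal link from leader to outcome at all.

```python
import random


def simulate(n: int = 100_000, seed: int = 42) -> tuple[float, float]:
    """Return observed success rates under 'type A' and 'type B' leaders.

    Leader type has NO causal effect on success in this toy model;
    a hidden confounder ('resources') drives both variables.
    """
    random.seed(seed)
    successes = {"A": 0, "B": 0}
    counts = {"A": 0, "B": 0}

    for _ in range(n):
        # Hidden confounder, invisible to the hypothetical researcher
        well_resourced = random.random() < 0.5
        # Well-resourced organizations are more likely to appoint 'type A' leaders...
        leader = "A" if random.random() < (0.8 if well_resourced else 0.2) else "B"
        # ...and it is resources, not leader type, that produce 'success'
        succeeded = random.random() < (0.7 if well_resourced else 0.3)
        counts[leader] += 1
        successes[leader] += succeeded

    return successes["A"] / counts["A"], successes["B"] / counts["B"]


if __name__ == "__main__":
    rate_a, rate_b = simulate()
    # 'Type A' leaders appear markedly more successful (~0.62 vs ~0.38),
    # yet swapping the leader would change nothing in this model.
    print(f"success rate under type-A leaders: {rate_a:.2f}")
    print(f"success rate under type-B leaders: {rate_b:.2f}")
```

The correlation the simulated researcher would observe is robustly 'statistically significant' at this sample size, which is precisely the point: significance testing cannot, by itself, rule out a common cause or establish directionality.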
Despite this, the link between successful performance and the preferred background characteristic was turned into a linear, ‘if you do this you’ll get that’ relationship. The ‘causal vacuum’ was filled by the construction of a narrative that sought to identify why the defined condition might plausibly lead to success. Since this was presented in the form of a ‘flow diagram’, the necessarily tentative inferences that had been drawn from the data appeared to show clear directionality and causal logic. None of this, though, was a product of the research. Nor - if we take the complexity of organizations seriously - could it ever be.
Taking complexity seriously
In the complex social process of human interaction that we think of as organization, it is not possible to demonstrate links between cause and effect in anything but the most limited conditions. In essence, cause-and-effect explanations cease to have any meaning, as does research of the conventional kind. The scientific paradigm, on which such research is based, presumes that generalizable conclusions can be drawn from the analysis of data and applied universally. But this is not the case with organizational dynamics. For hard-pressed managers, though, such research appears to offer proof and certainty - pinpointing a sure-fire route through the real-world complexities that they face on a daily basis.
In a truly scientific context, of course, such research is essential to the progress and wellbeing of humanity. Its purpose is to develop a clear body of evidence to inform future practice in a wide range of situations. Medical and technological advances, for example, owe everything to rigorous scientific research. But organizations, as socially constructed phenomena, are not amenable to scientific analysis and to the presumption of linear causality on which this is founded.
As I’ve argued throughout this blog, organization is a relational phenomenon - a self-organizing, patterning process of human (essentially conversational) interaction. Such sense-making-cum-action-taking interactions are always characterized by differing and potentially conflicting interpretations, intentions, identities, interests and ideologies. And what we come to see as "outcomes" - whether formally designed, informally practised, or unconsciously embodied - emerge from the widespread and indeterminate interplay of these local (i.e. small-group and one-to-one) interactions between idiosyncratic yet interdependent people.
I-opening research
One of the charges that is made when challenging the validity of using conventional, science-based research methodologies in a social context is that, if you reject this as a means of gaining (supposedly) greater awareness and understanding, you are left with nowhere to go. This is not the case at all. The conversations that comprise ‘organization’ are local (as defined earlier). And it’s here, in the specific local context, that practitioners can 'research' their own practice in the midst of their ongoing interactions - drawing out the contextual factors, dominant conversational themes, prevalent behavioural patterns (both characteristic and unexpected), and governing assumptions:
- that are organizing their interactions, and enabling and constraining their practice;
- out of which their local and more widespread outcomes are emerging; and
- which point to potentially beneficial shifts that might be made in their current sense-making-cum-action-taking conversations.
Such ‘practice-based research’ (to echo an earlier post) is likely to ‘open up’ some of the ‘I’s that I mentioned above. By this I mean that those involved would seek to gain insights from the differing interpretations, intentions, identities, interests, ideologies, idiosyncrasies and interdependencies, etc. that are in play - at that time, in that situation and within those relationships. These everyday politics of organizational life are simultaneously shaping and being shaped by the currency of their ongoing interactions.
In the end, it means recognizing that all that anyone can ever do is to ‘act into’ the future as it is emerging - facilitated by a reflective and reflexive approach to their own individual and collective practice.
_________________
Related posts:
Mystic Megaproject - Predicting the future with Big Science and Big Data (or not)
You can't put an organization in a test tube - challenging evidence-based practice
From evidence-based practice to practice-based evidence