In my recent paper, Taking Organisational Complexity Seriously, I criticised Sir Robert Francis’s recommendations arising from his inquiry into the Mid-Staffordshire NHS Trust. I argued that these were too focused on “the system” and not enough on the everyday, local interactions between practitioners, patients and others through which outcomes emerge in practice. I suggested that a better response might have been:
“… to encourage, assist and enable those involved to explore their current experience of their individual and collective practice: the situational specifics and taken-for-granted patterns of thought, feeling and behaviour that are organising that practice and which might tend to undermine the dedication and commitment that has long come to be associated with ‘the caring professions’”.
Thankfully, the subsequent review of practices in a number of other NHS Trusts by Sir Bruce Keogh seems to have adopted just such an approach – at least for the most part. Most importantly, his recommendations appear to be focused on the specific needs of those staff in each hospital and Trust whose job it is to deliver the necessary quality of “clinical effectiveness, patient experience, and safety”.
Although the shameful bickering and thinly disguised point-scoring that took place in the House of Commons in the immediate wake of the report’s publication might have suggested otherwise, the language in Keogh’s report is measured. And, pleasingly, his grasp of the complex nature of the dynamics involved in ensuring high-quality care is laid bare in his report:
“… our analysis of these 14 hospitals proves that understanding mortality (and concepts such as excess and avoidable deaths) is much more complex than studying a single hospital-level indicator. There are many different causes of high mortality and no ‘magic bullet’ for preventing it.”
In his covering letter to the Secretary of State for Health, Jeremy Hunt, he further seeks to debunk the idea that such abstract measures as the Summary Hospital-level Mortality Indicator (SHMI) or the Hospital Standardised Mortality Ratio (HSMR) can be used to quantify actual numbers of “avoidable deaths” or to relate these to inadequate levels of care:
“However tempting it may be, it is clinically meaningless and academically reckless to use such statistical measures to quantify actual numbers of avoidable deaths”.
Despite this, it is, of course, precisely the use to which such measures have been put by politicians and media commentators alike!
So, at first sight, Keogh’s review offers the hope that more recognition might be given to the complex social reality of everyday hospital life through which care is ultimately provided. At the same time, though, he fails to challenge the need for the plethora of organisations that surround the provision of health-care services ‘on the ground’. As a result, I fear that more emphasis is still likely to be placed on checking practitioners’ compliance with generalized rules and standards imposed by these ‘arm’s-length’ bodies than on helping them to deal with the specific realities that they face in the midst of ‘live’ practice.
In his recommendations, Keogh:
- emphasizes that, whilst some common themes could usefully be addressed across a range of Trusts, managers and staff in each institution face “a unique set of problems”;
- stresses the importance of focusing on the contributions of frontline clinicians, whose “constant interaction with patients and their natural innovative tendencies means they are likely to be the best champions for patients and their energy must be tapped not sapped”;
- maintains that transparency should primarily be used to provide support and foster improvement, whereas it is more often than not currently used to hold someone to account (which he brackets here with the idea of finding someone to blame);
- argues in support of peer review as a route to improved performance and strongly advocates the active networking of knowledge and ideas between hospitals and clinicians, to improve “access to the latest clinical, academic and management thinking”;
- firmly believes that a sense of ambition can be fostered “if we give staff the confidence to achieve excellence”; and
- sees junior doctors as the “most powerful agents for change”, as they move between hospitals, and student nurses as “ambassadors for their hospital and for promoting innovative nursing practice” as they move between wards.
So what would I see as the more disappointing features of Keogh’s recommendations, as viewed from an informal coalitions/complex social process perspective?
First, although Keogh reports that involving patients and staff was the single most powerful aspect of the review process, it’s unfortunate that he slips into the use of managerialist language (advocating the need for “real-time feedback”, “customer service”, and “engagement”, for example) rather than arguing, perhaps, that staff might be encouraged, assisted and enabled to listen to, talk with and respond to patients and their relatives as a matter of everyday routine.
Secondly, it is also disappointing that the taken-for-granted assumption appears to be that it is the conduct of the inspection regime that is critical to the delivery of the sought-after clinical effectiveness, patient experience, and safety. For example, he advocates that the inclusive nature of his investigations (involving clinical staff, patients, managers, and the like) should be adopted as routine by those charged with monitoring quality. But, however well the inspections and recommended “listening events” might be carried out in future, such ‘set-piece’ occasions take place on only a tiny proportion of the patient-days during which staff and patients are (actually or potentially) interacting. It is during these ongoing interactions that quality care and clinical excellence are delivered (or not). And it’s here that some of the more inclusive and participative practices advocated by Keogh might add most value.
Thirdly, the report highlights the debilitating effect that coupling the desire for transparency with the search for someone to blame can have on the ultimate goal of improvement. However, Keogh links accountability and blame together, almost as synonyms, and contrasts these with what he sees as the more desirable focus on support and improvement. I think that an opportunity was lost here to help reframe people’s understanding of “accountability” as the ability of practitioners to account for what they are doing and why they are doing it as part of their everyday practice. Conversations with peers and more senior practitioners along such lines would make an important contribution to the ongoing improvement of their own and others’ performance. Instead, he has left it as a label for people to hang around the necks of ‘the guilty’ when things appear to have ‘gone wrong’.
In this sense, I think there is also some naivety in failing to recognize the impact of media hype and party-political spin on people’s interpretation of the raw data. Rather than enabling “openness and transparency” where it matters most - on the ground, in the midst of ongoing practice - this failure to acknowledge the socio-political nature of life in organizations means that people are still unlikely to admit to mistakes – however innocently these might have come about. A clue to this is contained in Keogh’s own commentary on the ‘lessons learnt’ from the make-up of the focus groups used in his review:
“… the presence of more senior staff in these groups, or mixed-grade groups made staff in lower grades less willing to speak out, or to be truly honest.”
Organizational life is unavoidably political. And these dynamics are ‘ratcheted up’ whenever “transparency” (i.e. full disclosure) is primarily used as a ‘stick to beat people up’.
Fourthly, Keogh points to the somewhat sinister-sounding Quality Surveillance Groups as “the place that data and soft intelligence comes together”, allowing them to support the Care Quality Commission by “identifying areas of greatest risk”. Here again, the sense is one of local practitioners being ‘put on the back foot’ by better-informed inspectors and regulators, rather than being provided with data and ‘soft intelligence’ that might enable them to self-manage their affairs more effectively. Keogh does acknowledge that local provider Boards might usefully choose to incorporate some of his research methodology into their local practices (data-informed, participative, unconstrained by “the limitations of a rigid set of tick box criteria”, and so on), but the primary focus is on the multitude of non-practitioner groups who surround the local curers and carers.
Finally, despite the recognition that Keogh gives to the contingencies of local practice and to the diverse needs and circumstances of local practitioners, the presumed ability to compare, contrast and manage hospitals on an NHS-wide basis is still writ large. Here, as elsewhere, it is vitally important to ensure that the ‘scorekeepers and commentators’ are not seen to be more important than the players!