I recently came across a LinkedIn blogpost by Tim Rogers entitled "Who should we insist wear Health Tech?" In it, he advocates the use of various wearable technologies as a way of improving individuals’ health, and then speculates about the potential benefits these could bring to wider society. On the face of it, it all sounds reasonable and straightforward.
However, I find the use of the verb "insist" in the post’s title revealing. As so often seems to be the case, there is a "we" doing the insisting and some unnamed others who are expected to comply. So power and politics are very much in play here (as they always will be, of course), even though these underlying dynamics of human interaction are rarely if ever acknowledged when the use of such technologies is advocated. The same could be said about the morality of the whys, whats, and hows of their deployment. In a similar vein, the word "opportunity" usually features prominently – as it does here. But one person’s opportunity is another person’s threat, as they (almost) say.
What concerns me most is that proponents seem to take it for granted that all such developments are unquestionably good – invented by 'good' people, with the intention of doing 'good' things, for the 'good' of other individuals, and thereby enhancing the common 'good'. The real-world dynamics of human interaction are conveniently ignored, in favour of a utopian view in which there is universal wellbeing, co-operation and harmony. A similar pattern can be found in much of what is advocated as a way of 'dealing with' complexity.
This "And they all lived happily ever after" naivety undermines serious consideration of emerging technologies – both in terms of the genuine potential they might have for improving the human condition and in terms of the risks that might be involved in their deployment.
As an example, I listened to Sandy Pentland's piece on sociometrics ("Honest Signals"), mentioned by one of the commenters on the post. Towards the end, Pentland talks about his goal of developing "new media that are socially aware"; media that will enable others to know not only what you are saying but also how you feel about what is being said. There is no question in his mind that this will "raise productivity… reduce conflict… and lead to a much more pleasant world". But will it?
The same could no doubt have been said about the removal of internal borders and the unrestricted movement of people within the bulk of the EU, when this was initiated by the Schengen Agreement. My guess is that not many of the signatories to that document would use such terms now, in the midst of the humanitarian crisis that is dominating today’s headlines. Modifying an agreement might be relatively straightforward, of course. Getting rid of technology-enabled capabilities that serve the interests and/or ideologies of some groups in the population particularly well is not so doable, however – even where (and in some cases especially where) others suffer as a result.
The techno-human condition
A year or so ago, I had an interesting chat with Dan Sarewitz, a professor from Arizona State University. He is co-author (with his fellow professor, Braden Allenby) of a book entitled The Techno-Human Condition. In it, they seek to expose some of the flawed thinking around the deployment of new technologies in a social context and offer a response which is more congruent with what I would call the complex social dynamics of human being. In explaining their response, they say that we should,
“Stop trying to think our way out of what is too complex to be adequately understood, and seek the sources of rationality and ethical action in our uncertainty about most things, rather than in our knowledge about and control over just a few things.”
There are strong resonances here with Fortun and Bernstein’s Muddling Through, which I referred to in my previous post. To quote another extract from that book, “… this thing called modern industrial democracy happens in the absence of our full understanding. Immense, rapid changes occur ‘implicitly’, with practically no foreknowledge or planning. We don’t know how we got here, let alone what we might do. The fantasy of a social evolution that can be mastered, understood and guided may very well be just that: a fantasy.”