The Swan in the Machine

Wikipedia, quoting Nassim Taleb and noting his acknowledgement of Karl Popper, defines a Black Swan event as “a large-impact, hard-to-predict, and rare event beyond the realm of normal expectations”, and makes the point that what distinguishes Taleb’s conception from previous work is not just the unpredictability but the high impact of the event.

Black Swans have been occupying many distinguished minds in recent weeks, in the wake of the global financial turmoil that followed the US sub-prime mortgage crisis, with the perception that a key component of the problem was the mis-pricing – that is, the misjudging – of the unsuspected or hidden risks involved.

The outlier “Black Swan” events and their unexpected knock-on effects were presented as yet another demonstration of the unpredictability of modern society and its achievements. This article argues that, although the specific event may have been unpredictable, the increased likelihood of such an event was not only predictable but, in fact, inevitable.

The late Professor Claudio Ciborra wrote about risk and the way in which increased system complexity and integration raise both the probability of unexpected effects and the damage they cause. He was writing on the specific topic of highly integrated ICT systems, such as the Enterprise Resource Planning (ERP) systems widely used in the commercial and financial world, but his thinking was influenced by the work of those concerned with more general risks: Ulrich Beck, with his concept of the “Risk Society” (a society organised in response to risk), and Anthony Giddens, with his concept of “Reflexive Modernity” (in which modernisation is constantly affected by new information about the modernisation process itself). Ciborra concluded that an unavoidable effect of globalisation was the globalisation of side effects, i.e. higher risks and higher-impact consequences.

Management responses to what are seen as man-made problems typically seek increased control, whether through the management maxim that “you cannot control what you cannot measure”, through more strategy formulation, or through increasingly futile attempts to enforce conformance with strategies or project plans. These responses rest on a theory that turns out to be a practical house of cards – the myth that increased control minimises risk, especially when attempting to achieve ‘alignment’ between business re-engineering and the ICT that supports it. An increasing body of evidence points to the opposite conclusion: that increased control of human activities actually magnifies the risks. Furthermore, all too often, even when system failures are analysed, only single-loop learning is applied – “the plan was right, but we didn’t enforce it rigorously enough” – and the need for double-loop learning, questioning the plan or the need for the project itself, is ignored.

A disturbing, but hard-to-refute, conclusion of this observation is that the current global financial problems cannot be solved by legislation alone. In fact, a legislation-based response may actually be counter-productive. The effects of increased legal controls (Sarbanes-Oxley, etc.) introduced in response to recent high-impact failures and headline-grabbing scandals are turning out to be not what was intended. Honest companies are finding the regulatory overhead too high; it is proving a disincentive to corporate activity and is denounced as harmful to national competitiveness – perhaps a classic example of “legislate in haste, regret at leisure”.

Forty years ago, Arthur Koestler wrote in his book “The Ghost in the Machine” of mankind’s tendency towards self-destruction and the extent to which base emotions, such as hate and anger, can overwhelm more intellectual thought processes such as logic. He was writing in the 1960s, when the fear was global nuclear destruction, but the basic premise is still valid, because it refers to the ghost of base human nature in the machine of logic or higher thought processes.

The ‘butterfly effect’, which became popular a few years after Koestler’s book, described the huge variation in outcomes that can result from tiny changes in the initial conditions of highly complex systems such as the weather – another topic much debated recently. This led, via chaos theory and the work of many other researchers and authors, to the specific notion of the complex, as opposed to the merely complicated. Complicated systems, with enough effort, can be understood in the scientific sense of being able to predict how they will behave, i.e. they are deterministic; even the largest and most powerful computer is still only complicated. Complex systems, by definition, cannot be understood or predicted in this way and can be described only in terms of probability, and only for a limited time into the future, if they can be described at all. Biological and natural systems, like the weather, are complex. So are large, integrated information systems, because they combine deterministic computers with unpredictable human elements – analysts, programmers, integrators and users – and rely on human-generated and human-mediated information.
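
To make the distinction concrete, here is a minimal sketch (my illustration, not drawn from the article) using the logistic map, a textbook example of deterministic chaos: the update rule is completely known, yet two trajectories whose starting points differ by only one part in a billion soon bear no resemblance to each other.

```python
# Illustrative sketch of sensitive dependence on initial conditions,
# using the logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4.0
# (a standard example of deterministic chaos; not from the original article).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Return the trajectory of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)   # baseline initial condition
b = logistic_trajectory(0.300000001)   # differs by one part in a billion

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (difference {abs(a[n] - b[n]):.6f})")
```

Run as written, the two columns agree closely at first and then diverge to entirely unrelated values within a few dozen steps – the butterfly’s wings in numerical form.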

Global business and financial empires now fall into the complex category. Although, individually, they may be merely complicated, the national and global inter-linking between the information they contain and the financial institutions and instruments on which they depend elevates them to the status of complex, while attempts at accounting descriptions of them become less factual and more risk- or opinion-based. Off-balance-sheet activities, conduits and SIVs increase the complexity, further reducing the scope for objective assessment at both ends of the conduit – often either tacitly or intentionally. Accordingly, unexpected events can hardly be surprising. Even without the help of human deceit, the complexity alone means Black Swan events are inevitable.

Given that ICT is one of the main driving forces enabling globalisation, it is reasonable to draw on another IT-based example, the often-quoted Cobb’s Paradox: “We know why projects fail, we know how to prevent their failure – so why do they still fail?” (Martin Cobb, Treasury Board of Canada Secretariat, 1994). Ten reasons for failure were identified, but they can be summarised in a single word – “people”. The old ‘system’ view in the sense of O & M (organisation and methods), discarded in the rush to computerise organisations, may not have been so old-fashioned after all. People are the key to successful systems, not just in their design and development, which preoccupy program and project managers, but also in their continued operation. Yet only one of the ten reasons offered to explain Cobb’s Paradox refers to the system users.

Professor Ciborra also wrote about the extent to which the success or failure of complex information systems is affected by the moods and feelings of the users who work with them – the financial world might recognise this as market sentiment, but may not realise the extent of the effect beyond individual trading decisions.

I argue further that a key determinant of the ongoing success of any information system is the extent and complexity of the interaction between the information and the users. This may, in turn, be affected by the mood of the users, the understanding of the original requirement, the quality of the implementation, the extent of management support, and so on, but I postulate that the degree of interaction complexity is the primary determinant, because it reduces the predictable technology component of the system, increases the complexity and so magnifies the uncertainty. The greater the extent of the integration, i.e. the more the users depend on the system to do their work, the greater the potential for human weakness to disrupt the system – the interaction is neither stabilising negative feedback nor destabilising positive feedback alone, but can be either or both at once, in varying degrees, in unpredictable ways, at unpredictable times and with unpredictable consequences. The more users the system serves, and in turn relies on, the worse the situation becomes and the greater the impact of failures. Consider this in terms of the global financial markets and the corporate structures they enable and depend on. Consider it also in terms of the ease with which small, deliberate cyber-warfare disruptions could escalate into high-impact events: Black Swans can be intentional too.
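
As a purely illustrative toy model (my own sketch under assumed dynamics, not something from the article), the snippet below couples a system’s state to its users through a feedback gain that drifts with user behaviour: while the gain stays below 1 in magnitude the coupling damps disturbances, and once it drifts above 1 the same coupling amplifies them, so a single mechanism is stabilising at some moments and destabilising at others.

```python
# Hypothetical toy model, for illustration only: user-system feedback whose
# character drifts over time. |gain| < 1 damps a disturbance (negative
# feedback); |gain| > 1 amplifies it (positive feedback). Because the gain
# wanders with user behaviour, the same coupling switches regimes unpredictably.
import random

random.seed(1)      # fixed seed so the sketch is reproducible

state = 1.0         # size of the current disturbance
gain = 0.8          # the coupling starts out damping
for step in range(20):
    gain += random.uniform(-0.15, 0.15)                  # user mood/behaviour drifts
    state = gain * state + random.uniform(-0.05, 0.05)   # feedback plus fresh noise
    regime = "damping" if abs(gain) < 1 else "amplifying"
    print(f"step {step:2d}: gain {gain:+.2f} ({regime})  disturbance {state:+.3f}")
```

Nothing in the model is specific to finance or ICT; the point is only that when the feedback itself depends on unpredictable human behaviour, the overall system cannot be classified as simply self-correcting or simply self-amplifying.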

Some years ago, there was a debate about how best to foster creativity, with the conclusion that it cannot be achieved directly, only indirectly, by creating an environment in which creativity can flourish. It turns out to be more like gardening than traditional management or government.

In the pre-literate world, knowledge was derived from experience and the observation of analogies, and the perception was of unclear boundaries between the self and a world in which everything was magically connected to everything else. In today’s pervasively networked, knowledge-era society, everything can, literally, be connected to everything else.

Perhaps, by creating a highly complex, interlinked, globalised world that depends on the subjective way we humans interact with it and with its information, we have, without realising it until now, created a fertile environment – a risk garden – for the Black Swan in the machine, and so we should not be so surprised when it appears. More worryingly, what if Black Swans are part of reflexive modernity, and the emergence of one increases the probability of the next? Maybe we should be worrying less about a butterfly’s wings and more about a black swan’s wings...