Miscellaneous Ramblings

Critical thinking combined with evidence-based research is a powerful methodology for understanding and providing solutions in circumstances in which statistical research methodologies are either inappropriate or unsustainable. But if the facts are not recognised and accepted as facts, and the problem is not recognised, understood or acknowledged, then no amount of critical thinking and evidence-based research will produce sustainable and valid solutions. This failure of reason is now a very real issue in the modern world.

In this social media-driven world, most people don’t bother thinking: they simply rely on their intuition and heuristic responses because that involves less hard work. After all, they don’t have time to think, and every moment is focused on reacting. Even amongst those who should be thinking, there appears to be a significant number for whom the only acceptable research model is the classical empirical methodology: developing a hypothesis after analysis of pre-existing theories, designing a data collection process, collecting the data, analysing them using a statistically robust method, and comparing and contrasting them with the results in the published literature before drawing conclusions. This might be an acceptable approach for undergraduate and perhaps graduate research, but it is hardly appropriate for post-graduate research, as such thinking rejects or chooses to ignore what Albert Einstein called Gedankenexperiments (‘thought experiments’): the use of critical thinking based on first principles and multiple sources of evidence to infer or deduce new knowledge and understanding.

Michael Scriven and Richard Paul suggested in 1987[1] that critical thinking…

… is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness.
It entails the examination of those structures or elements of thought implicit in all reasoning: purpose, problem, or question-at-issue; assumptions; concepts; empirical grounding; reasoning leading to conclusions; implications and consequences; objections from alternative viewpoints; and frame of reference. Critical thinking — in being responsive to variable subject matter, issues, and purposes — is incorporated in a family of interwoven modes of thinking, among them: scientific thinking, mathematical thinking, historical thinking, anthropological thinking, economic thinking, moral thinking, and philosophical thinking.

Karen Robinson (2009)[2], meanwhile, considered evidence-based research to be ‘the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient and accessible manner’.

But it is the lack of critical thinking and evidence-based research, and the over-reliance on empiricism, that has on numerous occasions led to a failure of reasoning and, with it, to avoidable catastrophes.

In his 2005 book, Collapse: How Societies Choose to Fail or Survive, Jared Diamond, an American polymathic geographer and anthropologist, describes a range of cultures and societies, from the modern USA, back through the medieval Vikings of Greenland, to the ancient statue-carving Polynesians of Easter Island, and examines the critical steps each took that led to the collapse of their societies.

Diamond’s thinking leads him to postulate a four-step failure model describing what happens when societies fail to recognise, refuse to consider, or simply cannot address the issues facing them, issues which then fundamentally undermine the critical assumptions on which their societal models rest. These factors lead to failures in group decision-making and, ultimately, to catastrophic outcomes, including the failure to survive:

  • The failure to anticipate a problem before it arrives. This is similar to, and perhaps a synonym for, the failure to understand that current short-term decisions may lead to future problems: what can be considered the law of unintended consequences.
  • The failure to recognise as a problem a problem that has already arrived. This can also be a refusal to recognise a problem as a problem because, in the short term, it does not appear to be one. This is often associated with the failure to learn from history.
  • The failure to attempt to solve a problem once it has arrived and been recognised. This is denial of, and a tendency to ‘turn a blind eye’ to, the issues. This often stems from the belief that things will resolve themselves or that someone else is responsible for solving the problem – a case of ‘passing the buck’.
  • The failure to solve a recognised problem because the problem and/or the solution is beyond the capacity and competence of the group, or because solving it is not in the group’s short-term self-interest. This may include the solution being beyond the available resources of the group, or the group having lost the skills needed to address the issue. It also occurs when the group has a short-term socio-economic need not to solve the problem: in other words, they would lose political or economic power by doing so.

This four-step approach, which I will call the Diamond Collapse Hypothesis, does not, however, apply only to the anthropological study of societal collapse; it also has an application in the study of the historical decline of empires, the collapse of companies and the defeat of armies, and, on a micro scale, it is central to the success or failure of critical thinking. And it almost always starts with the ability or inability of people to process and understand reality, and to define and then take action that will lead to rational and desirable solutions.

Although the underpinning concept is originally attributed to René Descartes (1596-1650), the Santiago Theory of Cognition (1972) proposes that ‘we do not perceive the world we see, we see the world we perceive’: in other words, each of us has our own ‘reality’ and this is entirely autopoietic, i.e. it is created by ourselves as a result of our cumulative personal experience. The result is that we believe what our experience has taught us to believe, even when the evidence suggests this is wrong. This blind adherence to a fixed set of beliefs seems to be the very antithesis of critical thinking and evidence-based research, and may well lead us to reject ideas that are not aligned with our own autopoietic reality. This was dramatically illustrated by Hans Rosling (1946-2017), a Swedish physician and statistician, in a TED talk entitled How not to be ignorant about the world[3], in which he showed that people have a highly distorted understanding of reality, one that is not even randomly wrong.

According to Rosling, this inability to grasp reality is caused by personal bias, reinforced by outdated knowledge taught in schools and universities, and coupled with biased news reporting. It is then further embedded in our brains by what Daniel Kahneman (b. 1934) called ‘loss aversion’, in which we value what we have roughly twice as much as what we might gain from changing our minds: thus we are very reluctant to abandon our current reality even when it is made absolutely clear that it is wrong and that we would be significantly better off with a new reality.
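As a rough, hypothetical sketch of the asymmetry Kahneman describes (the factor of 2 simply mirrors the ‘twice as much’ figure quoted above; empirical estimates vary), the effect can be expressed as a simple loss-weighted value function in Python:

    # Hypothetical sketch of loss aversion: losses are felt roughly twice as
    # strongly as equivalent gains (the factor 2 mirrors the text above).
    LOSS_AVERSION = 2.0

    def felt_value(change):
        """Subjective value of an objective gain (+) or loss (-)."""
        return change if change >= 0 else LOSS_AVERSION * change

    # Abandoning a current belief feels like losing 100 'units'; adopting a
    # better one feels like gaining 100 units of equal objective size.
    print(felt_value(-100) + felt_value(+100))  # -100: the net feeling is negative, so we resist changing our minds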

It is this that underpins the first two parts of the Diamond Collapse Hypothesis: the failure to anticipate a problem before it arrives, and the failure to recognise as a problem a problem that has already arrived. Putting it simply, our personal reality makes it difficult for us to believe that our actions and ideas may logically (and thus more or less inevitably) lead to failure. And, of course, once that failure (the problem) has arrived, we often have so much political, social or economic interest invested in our personal reality that we deny the problem exists. This is related to what economists call the ‘sunk cost fallacy’: the belief that it is better to continue to invest in a failure than to abandon it, cut our losses, and re-think the situation.
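A minimal numeric illustration of the fallacy (the figures are invented purely for this sketch): a forward-looking decision ignores what has already been spent, whereas the fallacy counts the unrecoverable past spend as a reason to carry on.

    # Invented figures, purely illustrative of the sunk cost fallacy.
    sunk_cost = 80          # already spent and unrecoverable, whatever we decide
    cost_to_finish = 50     # further spend needed to complete the project
    value_if_finished = 30  # what the finished project would actually be worth

    # Rational, forward-looking comparison: the sunk cost is irrelevant.
    net_if_continue = value_if_finished - cost_to_finish   # -20: continuing destroys more value
    net_if_abandon = 0                                      # walking away costs nothing further

    # The fallacy: 'we have already spent 80, so we cannot stop now' -- and the
    # total loss grows from 80 (abandon) to 100 (continue).
    print(net_if_continue, net_if_abandon)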

This process has been clearly visible in the recent past with the 2007/2008 sub-prime loans crisis in the USA’s banking sector, in which it was obvious what the logical outcome would be, but too many bankers had too much invested in a disastrous scenario to be able to abandon it[4]. It is also evident in the 2016-2017 Brexit situation, in which too many politicians and other Brexit supporters had too much at stake in pursuing Brexit to be able to step back, abandon the process, and re-think their relationship with Europe[5]. It reminds me of what one wise business leader said: ‘the golden rule of holes is to stop digging when you’re in one’.

The third part of the Diamond Collapse Hypothesis, the failure to attempt to solve a problem once it has arrived and been recognised, is founded on a well-recognised but often misunderstood behavioural response to stress: denial. This comes from the work of Elisabeth Kübler-Ross (1926-2004), a Swiss-American psychiatrist whose pioneering work with terminally ill patients showed that, when faced with a very unpalatable and deeply stressful future event, the mind creates a behavioural response: to deny that the event applies to them. This denial was considered to be the first phase of a coping cycle that then led through anger, bargaining and depression, and finally to acceptance. It was subsequently built on by Colin Carnall (b. 1947), a management school professor, to illustrate people’s reaction to change[6], and I used it in my 2009 paper, From Comfort Zone to Performance Management. As denial turns to anger and then bargaining, this turning of a ‘blind eye’ to the problem appears to lead to two scenarios: the problem becomes undiscussable because it is ‘someone else’s problem to deal with’, or it is thought unnecessary to deal with because it will simply go away if ignored.

This denial phase is evident in the USA in 2017 and 2018 in response to gun violence: it is undiscussable, but it remains a problem. It was also evident amongst British politicians in early 2018 during the Brexit ‘divorce’ negotiations between the UK and the EU when the subject of a ‘hard border’ in Ireland was raised. This was always going to be a problem if the UK were to leave the EU customs union, and many pointed this out even before the 2016 referendum to leave the EU: the problem was not recognised as a logical outcome of a course of action (Diamond Collapse Hypothesis stage 1), it was not recognised as a problem by the UK even when it was raised by the EU in early 2017 (stage 2), and by March 2018 the UK had still made no effort to come up with a workable and practical solution and continued to deny that it was a problem (there is even evidence in news reports of the time that the UK thought it was someone else’s problem, or that it could be negotiated away as part of a future trade agreement).

The fourth stage of the Diamond Collapse Hypothesis, the failure to solve a recognised problem because the problem and/or the solution is beyond the capacity and competence of the group, or because solving it is not in the group’s short-term self-interest, is the recognition that, with the passing of time, the necessary knowledge, skills, attributes, competencies and capacity to solve the problem may have been lost: in other words, those tasked with solving the problem may no longer be able to solve it, or it has become too expensive to solve. At this stage the problem becomes overwhelming, and the situation collapses into total failure and has to be abandoned as unsustainable.

This may be the outcome of the Brexit negotiations, since the UK believes that free trade agreements can be negotiated in two to three years despite all the historical data showing that, even when there are no impediments, trade deals take seven to ten years to complete. Thus the UK may cease being a member of the EU and, at the same time, have no trade deals in place, with the inevitable consequence of a shrinking economy for a decade or so. This would be made worse by the fact that, as of the beginning of 2018, the UK has no trade negotiation capacity or competency, these having been lost through not being involved in trade negotiations for many years whilst a member of the EU.

The progression of the Diamond Collapse Hypothesis is not, of course, inevitable: it can be avoided provided people engage in critical thinking and evidence-based research … but one has to admit that, as Kahneman discusses in his 2011 book Thinking, Fast and Slow, thinking is hard, it is even harder for those not used to it, and it is far, far easier to react using our own preconceived ideas and autopoietic reality, even when these have nothing to do with the facts.

Alasdair White is a lecturer in Behavioural Economics at UIBS, a private business school with campuses in Brussels and Antwerp, as well as in Spain, Switzerland and Japan. He is the internationally respected and much cited author of a number of business and management books and papers, as well as a historian and authority on the Napoleonic era. 

[1] http://www.criticalthinking.org/pages/defining-critical-thinking/766, accessed 21 March 2018.

[2] Robinson, Karen A. Use of prior research in the justification and interpretation of clinical trials. The Johns Hopkins University, ProQuest, UMI Dissertations Publishing, 2009.

[3] https://www.gapminder.org/videos/how-not-to-be-ignorant-about-the-world/

[4] You would have thought a banker could spot a sunk cost fallacy a mile off, but no; they blindly carried on pouring money into these loans and eventually the whole bubble burst.

[5] The UK politicians, in particular, apparently felt that had they stepped back and said ‘this is wrong’ they would be unelectable for a generation.

[6] Carnall, Colin A., Managing Change in Organizations, Prentice Hall, 1990.

v.March2018

No one can have failed to notice the brouhaha over the UK’s EU exit referendum, and it has been difficult to develop any real understanding of the situation, but in this blog, Prof. Alasdair White of the United International Business Schools in Belgium offers some ‘facts’ and voices his opinion.

The basic situation

The United Kingdom held a referendum which asked whether people thought Britain should be in or out of the EU. This referendum was advisory and legally non-binding and does not have to be acted upon. However, it will be politically difficult to ignore it.

The referendum does not trigger the ‘we’re leaving’ clause (Art.50): that can only be done through the correct constitutional process of the United Kingdom. There is absolutely nothing the EU can do to ‘force’ the UK to send an Art.50 letter, and the UK Prime Minister, David Cameron, has announced that, as he has resigned, he cannot do anything about Art.50, which now falls to his successor to deal with. Under the Conservative Party constitution, there is a due process that has to be gone through before a new leader can be announced and take office, and the earliest this can happen is mid-September. Thereafter, the new Prime Minister has to activate the due parliamentary process to obtain legal authority to submit the Art.50 letter.

The due process is that a Bill has to be introduced into the UK Parliament calling for the repeal of the European Communities Act of 1972. This then has to be passed by both Houses of Parliament, and until that happens the new PM has no authority to instigate the Art.50 communication. It is highly likely, in the circumstances, that the earliest the Art.50 ‘we’re leaving the EU’ letter can be sent is late September, and more probably towards the end of this year.

Once the letter has been sent, the process of ‘divorce’ will start and that will take many years. Art.50 foresees a maximum of two years but also foresees that, with the agreement of the remaining Member States, this could take much longer. When the final agreement is reached, then, and only then, will the UK leave the EU. Until that happens, the earliest being in late 2018, the UK remains a full member of the EU with all the rights, privileges and obligations that entails, just as it was before the referendum.

No need to panic

This means there is no need to panic! Once the temporary shock of the announcement has calmed down and Brexit has been factored into the ‘markets’, things will continue much as before. However, in the short term it’s going to be an uncomfortable and possibly turbulent ride.

For entrepreneurs in the UK, Brexit presents risks and opportunities. If their business is focused on the home market (i.e. the current UK), then it is very probable that they will suffer little major impact. On the other hand, if they are focused on exports (or are major importers), then their access to world markets will be determined by whatever ‘deal’ is struck in the leaving negotiations and by what trade deals the UK manages to put in place. The most likely scenario is that imports will cost more (in some cases, a lot more) and exports will face tariffs that make the goods more expensive in their markets.

A further complexity for UK entrepreneurs is that any EU funds from which they benefit in terms of market access, research, development, growth funds, etc., will cease and may well not be replaced by corresponding UK funds – the ability of the UK to replace those funds will be influenced by the state of the UK economy, which the ‘experts’ predict will be weaker, smaller, with higher taxes and lower government spending.

For entrepreneurs in Europe, the focus should be on where their market is. If they export to the UK, or import from the UK, then the new tariff regime resulting from whatever ‘deal’ is negotiated will have an influence.

All in all, European entrepreneurs will face significantly less risk as a result of Brexit than will those in the UK. They will also face significantly less risk to their funding and borrowing structures. And, frankly, there is time to plan how to handle the risks both in the EU and in the UK, but this needs to be thought about as soon as the UK sends in its Art.50 ‘we’re leaving the EU’ letter.

As far as opportunities are concerned, any major change provides opportunities provided there is flexibility of response. Business as usual will not be an option if Brexit becomes a reality, but there is time to develop new products and new markets, as well as to put different funding structures in place, provided action is taken as soon as Art.50 is triggered.

So, for entrepreneurs in both the UK and the EU there is no need to panic about Brexit, but there is a need to do a risk analysis and to plan a response to those risks. There is also a need to develop a new approach to the changed market conditions. However, there is time for some serious but unhurried thinking, provided that thinking starts soon.

We live in ‘interesting times’, but for the enterprising business person there are opportunities to be taken as well as risks to be minimised.

Having looked at the irrationality of human economic behaviour, Alasdair White takes another look at consumerism and concludes that major changes in consumer behaviour have already occurred, and that consumerism as we have experienced it for the last 70 years is now effectively dead.

Human behaviour is a response to the stimuli we experience. The actual behaviour demonstrated is the result of learned responses that are deemed rational or irrational, acceptable or unacceptable, within the norms of the society or other bounded environment within which the person exists, and we all respond differently to different stimuli depending on the experiences we have had in the past. As a result, there is no robust evidence that human behaviour is predictable at the individual level, but there is evidence that socialisation, fashion and societal norms play a strong part in governing behaviours and channelling them into what can be considered acceptable or normal. Our economic behaviour is no different.

In the developed or ‘old’ economies around the world, and especially in Europe and North America, the dominant economic behaviour of the last 70 years has been based on ‘consumerism’ and its closely related cousin, ‘materialism’. Consumerism is a model in which an ‘economic actor’, the individual consumer, purchases a good (or service) which he or she does not necessarily need, uses it, and then disposes of it before the end of its useful life before buying a replacement. Materialism is a ‘greed’ model in which we ‘value’ ourselves and others based on the material possessions we or they possess. And there are those who would argue that both are the direct result of ‘capitalism’.

Now, the fact that materialism and consumerism are most obvious in capitalistic environments, and generally do not arise to the same extent in non-capitalistic environments, does not make ‘capitalism’ a causal factor: both are behavioural responses to psychological stimuli that exist outside the narrow definitions of the models developed in an attempt to explain our economic behaviour.

Let’s unpack the concepts so that what is really happening becomes clearer. Consumerism has an economic actor: a human deciding whether to buy from a supply of goods or services which he or she may not ‘need’. Now, a ‘need’ is a good or service that is essential to the economic actor’s survival in the environment he or she inhabits. At the most basic and fundamental level, a ‘need’ is a physical or emotional requirement without which the human will die: according to the 20th-century American psychologist Abraham Maslow, these include food and drink, protection from the elements and other dangers inherent in the environment, air to breathe, warmth, sex and sleep. This is, perhaps, too restrictive a definition in today’s environment and has been too narrowly interpreted. It has been suggested that such things as smartphones can also be considered ‘needs’, given that in the modern developed world it is virtually impossible to function effectively without them – this is a moot point.

The definition then goes on to talk about ‘using the good’, which is self-explanatory, and ‘disposing of the good before the end of its useful life’. Perhaps one of the most vivid examples of this is the above-mentioned smartphone: I frequently ask my graduate students about their smartphones, and all claim that, for them in their western European world, a smartphone is essential, but all admit to having acquired at least one replacement before their existing one had ceased to function – indeed, most of my students are on their sixth or seventh smartphone.

Clearly, for consumerism to exist there needs to be a large range of goods or services available, but there also needs to be a ready supply of money in the form of discretionary disposable income. Disposable income is that part of the economic actor’s income (or funds) that is in excess of what is required to fulfil real survival needs such as housing, food and the whole range of what we have come to regard as the regular fixed expenses of our lives. Disposable income is, however, often subject to periodic spending cycles for such things as social needs, and so the term ‘discretionary’ refers to that proportion of the disposable income that the economic actor can choose to spend on goods or services that are non-periodic.
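To make the distinction concrete, here is a small worked sketch with invented monthly figures (the numbers and categories are hypothetical, not drawn from the text above):

    # Hypothetical monthly budget illustrating disposable vs. discretionary income.
    income = 2500                 # total monthly income

    survival_costs = 1800         # housing, food and other regular fixed expenses
    disposable = income - survival_costs            # 700 left after survival needs

    periodic_spending = 300       # recurring non-survival outgoings (social needs, subscriptions)
    discretionary = disposable - periodic_spending  # 400

    # The 400 is what consumerism depends on: money the economic actor can choose
    # to spend on non-periodic goods or services he or she does not strictly need.
    print(disposable, discretionary)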

So far so obvious, perhaps, but in economic terms discretionary disposable income is a direct result of the development of a middle class. Prior to that, those at the bottom end of the economic scale, the vast majority of economic actors, had just enough income to cover their survival costs, and often not even that, while those at the top of the economic scale, those who owned the businesses and the land, had plenty of discretionary disposable income but accounted for only a small fraction of economic activity. Once a critical mass of middle-class consumers is achieved, we see the development of those wishing to supply it.

In the 1930s, consumerism was still a distant prospect: the emergent middle class had taken a battering in the Great Depression, the impoverished majority had little disposable income, never mind discretionary disposable income, and there was no real range of goods.

The start of World War Two changed all that and created all the conditions for consumerism to arise, first in the USA, then in Europe.

In the USA and the UK, the demands of warfare created a massive boom in innovation, both in technology and in methods of production, with the adoption of advanced versions of Ford’s production-line manufacturing. It also brought a mass intake of women into the labour force … and these women found themselves, for the first time, breadwinners, family decision-makers, and financially independent. All this played to the strengths of the ‘Anglo-Saxon’ national cultures (see the work of Geert Hofstede), with their high levels of individualism, independence, competitiveness and success-driven self-confidence, and their relatively low levels of hierarchical power distance.

By the end of 1946, the experience of the war had wrought a profound change in the societies of the USA (in particular) but also of the UK … they had won the war on the back of their industrial might and national characteristics, and they wanted to reap the benefits. The men had put their lives on the line for six long years and wanted well-paid jobs that delivered a high sense of self-worth and the material benefits of a changed world. The women, financially independent and used to making decisions, had no desire to return to the domestic existence that had been their lot pre-war. But for the manufacturing companies facing the end of the demand-driven boom years of the war, the change was potentially catastrophic: they urgently needed two things, (1) new product lines and (2) strongly growing domestic demand plus export markets.

Manufacturing enterprises were the first to react, and in a burst of innovation they quickly adapted the new technologies and production methods to producing items that matched the peace-time demand for labour-saving devices and the desire to be released from domestic ties. Washing machines and refrigerators replaced tanks and planes, the vacuum cleaner was developed, affordable cars replaced military vehicles, radio and then television were developed, and the whole lot was supported by the hard-sell strategies of the new advertising agencies as they ‘created’ the demand. All this was focused ruthlessly on the new aspiring middle-class families of suburban and small-town America … and they responded, creating an advertising-led, supply-driven or ‘push’ demand.

But there was a finite size to the domestic market, and the US government was not slow to create the Marshall Plan, under which American funds were distributed to re-build war-ravaged overseas economies so that they could absorb the growing supply of US-made consumer goods. As early as the 1930s, manufacturers had realised that selling just one unit to a buyer was wasteful, and so designed their goods to become obsolete, no longer functional and/or unfashionable after a certain time. By the late 1950s and throughout the 1960s, planned or built-in obsolescence was the norm, forcing buyers to continually upgrade or replace their appliances and goods. In 1960, Vance Packard, a cultural critic, published The Waste Makers, an exposé of ‘the systematic attempt of business to make us wasteful, debt-ridden, permanently discontented individuals’ perpetually seeking the latest, most fashionable version of products even though the current version still had life in it. Indeed, the post-war manufacturing boom created a consumer demand which led inexorably to the human economic actor becoming irrational to an extreme degree. Consumerism was a self-destructive and non-sustainable model, but consumers, faced with an ever-growing and more sophisticated array of goods and unable to resist their debt-driven gorging, had accepted it as both desirable and inevitable.

The ability of advertisers to ‘create a demand’ for goods and the advances in miniaturisation and computing led to a second major wave of innovation in the 1990s, something that lasted until the early years of the 21st century. This second wave of innovation in manufactured goods started slowing before 2010 and is now finally well towards the bottom of the down-wave, and focus has shifted to making the current technologies do more with the development of software and ‘apps’.

Interestingly, politicians and economists have not fully understood the linkages inherent in the consumer model. It wasn’t until 2008, when the financial and banking sector imploded so spectacularly, that the debt-driven binge-buying that had created asset bubbles out of all proportion to underlying values finally registered with consumers as being absolutely not in their self-interest. They stopped spending, started saving, and reduced the growth of western-style economies to almost zero. Inflation also shrank to almost 0%, something very much in the economic best interests of the consumer, but politicians beat their collective chests and appealed to consumers to start spending again. Regulation was introduced to encourage individuals to return to debt, economists bleated about the need for inflation, and politicians told their electorates they were being irresponsible by not spending or wasting their money … but consumers have been resolute, mostly, and the great over-inflated consumer bubble has burst.

And with that burst bubble has come the inevitable result. Manufacturing companies have needed to downsize; stockpiles have grown and pushed prices down even further (very much in the rational self-interest of the human economic actor); manufacturers are more or less having to give away their goods just to generate cash flow. Technology and innovation have created new ways of doing business, so there is a glut of commercial property, both office and retail; people are using the technology to shop online and are better informed about the products, making retail space much less important, which in turn results in the current glut of empty high-street shops; and even the huge generalist retailers are experiencing slowing or no growth, opening the way for artisanal and specialist retailers closely linked to their manufacturing base.

Having abandoned the self-destructive and irrational behaviour that drove the consumer model, the modern consumer is now more picky about what they buy and what service they expect. We have moved solidly away from a supply-side-driven push environment to a demand-side-driven one that is opening the developed economies in a way that allows manufacturers using just-in-time methodologies to dominate. The critical decision-making factors are now about lifestyle choices – consumerism is not dead, nor will it die, but consumers are more critically aware, better informed, willing to spend but demanding better durability, better performance and better reliability for their money. Personal debt mountains are shrinking, inflation-driven asset markets such as housing are slowing, and the economic world has changed profoundly.

Alasdair White is a professional change management practitioner, business school professor, and behavioural psychologist with a keen interest in behavioural economics. He is the author of four best-selling business and management books and, under the pen name of Alex Hunter, has published two thrillers. April 2016