Eight to Late

Sensemaking and Analytics for Organizations


Conversations and commitments: an encounter with emergent design 


Many years ago, I was tasked with setting up an Asia-based IT development hub for a large multinational.   I knew nothing about setting up a new organisation from scratch. It therefore seemed prudent to take the conventional route – i.e., engage experts to help.

I had conversations with several well-known consulting firms. They exuded an aura of confidence-inspiring competence and presented detailed plans about how they would go about it. Moreover, they quoted costs that sounded very reasonable.  

It was very tempting to outsource the problem.

–x–

Expert-centric approaches to building new technical capabilities are liable to fail because such initiatives often display characteristics of wicked problems – problems so complex and multifaceted that they are difficult to formulate clearly, let alone solve. This is because different stakeholder groups have different perspectives on what needs to be done and how it should be done.

The most important feature of such initiatives is that they cannot be tackled using the rational methods of planning, design and implementation that are taught in schools, propagated in books, and evangelized by standards authorities and snake oil salespeople, a.k.a. big consulting firms.

This points to a broader truth that technical initiatives are never purely technical; they invariably have a social dimension. It is therefore more appropriate to refer to them as sociotechnical problems.

–x–

One day, not long after my conversations with the consulting firms, I came across an article on Oliver Williamson’s Nobel Prize-winning work on transaction costs. The arguments presented therein drew my attention to the hidden costs of outsourcing.

The consultants I’d spoken with had included only upfront costs, neglecting the costs of coordination, communication, and rework. The outsourcing option would be cost effective only if the scale was large enough. The catch was that setting up a large development centre from scratch would be risky, both politically and financially. There was too much that could go wrong.

–x–

Building a new sociotechnical capability is a process of organisational learning. But learning itself is a process of trial and error, which is why planned approaches to building such capabilities tend to fail. 

All such initiatives are riddled with internal tensions that must be resolved before any progress can be made. To resolve these tensions successfully one needs to use an approach that respects the existing state of the organisation and introduces changes in an evolutionary manner that enables learning while involving those who will be affected by the change.  Following David Cavallo, who used such an approach in creating innovative educational interventions in Thailand, I call this process emergent design.

–x–

The mistake in my thinking was related to the fallacy of misplaced concreteness. I had been thinking about the development hub as a well-defined entity rather than an idea that needed to be fleshed out through a process of trial and error. This process would take time; it had to unfold in small steps, through many interactions and conversations.

It became clear to me that it would be safest to start quietly, without drawing much attention to what I was doing. That would enable me to test assumptions, gauge the organisation’s appetite for the change and, most importantly, learn by trial and error.

I felt an opportunity would present itself sooner rather than later.

–x–

In their book, Disclosing New Worlds, which I have discussed at length in this post, Spinosa et al. note that:

“[organisational] work [is] a matter of coordinating human activity – opening up conversations about one thing or another to produce a binding promise to perform an act … Work never appears in isolation but always in a context created by conversation.”

John Shotter and Ann Cunliffe flesh out the importance of conversations via their notion of managers as authors [of organisational reality]. Managers literally create (or author) realities through conversations that help people make sense of ambiguous situations and/or open up new possibilities.

Indeed, conversations are the lifeblood of organisations. It is through conversations that the myriad interactions in organisational life are transformed into commitments and thence into actions.

–x–

A few weeks later, a work colleague located in Europe called to catch up. We knew each other well from a project we had worked on a few years earlier. During the conversation, he complained about how hard it was to find database skills at a reasonable cost.

My antennae went up. I asked him what he considered to be a “reasonable cost.” The number he quoted was considerably more than one would pay for those skills at my location.  

“I think I can help you,” I said, “I can find you a developer for at most two thirds that cost here. Would you like to try that out for six months and see how it works?” 

“That’s very tempting,” he replied after a pause, “but it won’t work. What about equipment, workspace etc.? More importantly, what about approvals?” 

“I’ll sort out the workspace and equipment,” I replied, “and I’ll charge it back to your cost centre. As for the approval, let’s just keep this to ourselves for now. I’ll take the rap if there’s trouble later.” 

He laughed over the line. “I don’t think anyone will complain if this works. Let’s do it!” 

–x–

As Shotter and Cunliffe put it, management is about acting in relationally responsive ways. Seen in that light, conversations are more than just talk; they are about creating shared realities that lead to action.

How can one behave in a relationally responsive way? As in all situations involving human beings, there are no formulas, but there are some guiding principles that I have found useful in my own work as a manager and consultant:

Be a midwife rather than an expert: The first guideline is to realize that no one is an expert – neither you nor your Big $$$ consultant. True expertise comes from collaborative action. The role of the midwife is to create and foster the conditions for collaborative action to occur.

Act first, seek permission later (but exercise common sense): Many organisations have a long list of dos and don’ts. A useful guideline to keep in mind is that it is usually OK to launch exploratory actions as long as they are done in good faith, the benefits are demonstrable and, most importantly, the actions do not violate ethical principles. The dictum that it is easier to beg forgiveness than seek permission has a good deal of truth to it. However, you will need to think about the downsides of acting without permission in the context of your organisation, its tolerance for risk and the relationships you have with management.

Do not penalize people for learning: When setting up new capabilities, it is inevitable that things will go wrong. If you’re at the coalface, you will need to think about how you will deal with the fallout. A useful approach is to offer to take the rap if things go wrong. On the other hand, if you’re a senior manager overseeing an initiative that has failed, look for learnings, not scapegoats.

Distinguish between wicked and tame elements of your initiative: Some aspects of sociotechnical problems are wicked, others are straightforward (or tame). For example, in the case of the development centre, the wicked element was how to get started in a way that demonstrated value both to management and staff. The tame elements were the administrative issues: equipment, salary recharging etc. (though, as it turned out, some of these had longer term wicked elements – a story to be told later perhaps).

Actively seek other points of view: Initially, I thought of the development centre in terms of a large monolithic affair. After talking to consultants and doing my own research, I realised there was another way.

Understand the need for different types of thinking: related to the above, it is helpful to surround yourself with people who think differently from you.

Consider long term consequences:  Although it is important to act (the second point made above), it is also important to think through the consequences of one’s actions, the possible scenarios that might result and how one will deal with them.

Act so as to increase your future choices: This principle is from my intellectual hero, Heinz von Foerster, who called it the ethical imperative (see the last line of this paper). Given that one is acting in a situation that is inherently uncertain (certainly the case when one is setting up a new sociotechnical capability), one should be careful to ensure that one’s actions do not inadvertently constrain future choices.

–x–

With some trepidation, we decided to go ahead with the first hire.

A few months later, my colleague was more than happy with how things were going and started telling others about it. Word got around the organisation; one developer became three, then five, then more. Soon I was receiving more enquiries and requests than our small makeshift arrangement could handle. We had to rent dedicated office space, fit it out etc., but that was no longer a problem because management saw that it made good business sense.

–x–

This was my first encounter with emergent design. There have been many others since – some successful, others less so.   However, the approach has never failed me outright because a) the cost of failure is small and b) learnings gained from failures inform future attempts.

Although there are no set formulas for emergent design, there are principles.  My aim in this piece was to describe a few that I have found useful across different domains and contexts. The key takeaway is that emergent design increases one’s chances of success because it eschews expert-driven approaches in favour of practices tailored to the culture of the organisation.

As David Cavallo noted, “rather than having the one best way there can now be many possible ways. Rather than adapting one’s culture to the approach, one can adapt the approach to one’s culture.”

–x–x–

Written by K

September 14, 2021 at 4:43 am

Making sense of management – a conversation with Richard Claydon


KA 

Hi there. I’m restarting a series of conversations that I’d kicked off in 2014 but discontinued a year later for a variety of reasons. At that time, I’d interviewed a few interesting people who have a somewhat heretical view on things managers tend to take for granted. I thought there’s no better way to restart the series than to speak with Dr. Richard Claydon, who I have known for a few years.  Richard calls himself a management ironist and organisational misbehaviorist. Instead of going on and risking misrepresenting what he does, let me get him to jump in and tell you himself.

Welcome Richard, tell us a bit about what you do.

RC  

I position myself as having a pragmatic, realistic take on management. Most business schools have a very positivistic take on the subject, a “do A and get B” approach. On the other hand, you have a minority of academics – the critical theorists – who say, well actually if you do A, you might get B, but you also get C, D, E, F, G.  This is actually a more realistic take. However, critical management theory is full of jargon and deep theory so it’s very complex to understand. I try to position myself in the middle, between the two perspectives, because real life is actually messier than either side would like to admit.

I like to call myself a misbehaviourist because the mess in the middle is largely about misbehaviours – real, but more often perceived. Indeed, good behaviours are often misperceived as bad and bad behaviours misperceived as good. I should emphasise that my work is not about getting rid of the bad apples or performance managing people. Rather, it’s about working out what people are doing and, more importantly, why. And from that, probing the system and seeing if one can start effecting changes in behaviours and outcomes.

KA 

Interesting! What kind of reception do you get? In particular, is there an appetite for this kind of work – open-ended, with no guarantee of results?

RC 

Six of one, half a dozen of the other. I’ve noticed a greater appetite for what I do now than there was six or seven years ago. It might be that I’ve made what I do more digestible and more intelligible to people in the management space. Or it might be that people are actually recognising that what they’re currently doing isn’t working in the complex world we live in today. It’s probably a bit of both.

That said, I definitely think the shift in thinking has been accelerated by the pandemic. It’s sort of, we can’t carry on doing this anymore because it is not really helping us move forward. So, I am finding a larger proportion of people willing to explore new approaches.

KA 

Tell us a bit about the approaches you use.

RC 

As an example, I’ve used narrative analytics – collecting micro-narratives at massive scale across an organisation and then analysing them, akin to the stuff Dave Snowden does. Basically, we collect stories across the organisation, cluster them using machine learning techniques, and then get a team of people with different perspectives to look at the clusters. This gives us multiple readings on meaning. So, the team could consist of someone with leadership expertise, someone with expertise in mental health and wellbeing, someone with a behavioural background etc.
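(For readers who want to see the mechanics, here is a rough sketch in Python of the kind of clustering step Richard describes. The micro-narratives, the feature representation and the cluster count are all invented for illustration; this is not the actual tooling he uses.)

# Illustrative sketch: clustering micro-narratives using TF-IDF features
# and k-means. The narratives below are made up; real studies collect
# thousands of them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

narratives = [
    "I like skipping the commute and having quiet time to focus",
    "I feel isolated from my team and miss hallway conversations",
    "My manager checks on me constantly, as if I am not working",
    "I wish meetings were shorter and better organised",
    "Working from home is easy because my manager trusts me",
    "I wonder whether anyone reads the status reports I send",
]

# Represent each narrative as a TF-IDF vector over its words.
vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(narratives)

# Group similar narratives; the number of clusters is a judgement call.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

for label, text in sorted(zip(labels, narratives)):
    print(label, text)

(The machine only does the grouping; the multiple human readings of each cluster, described above, are where the meaning comes from.)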

We also use social network analysis to find out how information flows within an organisation. The aim here is to identify four very different types of characters: a) blockers – those who stop information from flowing, b) facilitators – those who ease the flow of information, c) connectors – information hubs, the go-to people in the organisation and d) mavericks – those who are thinking differently. And if you do that, you can start identifying where interesting things are happening, where different thinking is manifesting itself, and who’s carrying that thinking across the organisation.
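(Again, a toy sketch may help make this concrete. Standard centrality measures, here computed with the networkx package, give a first cut at spotting such characters; the network below is entirely invented, whereas real analyses are built from communication or survey data.)

# Illustrative sketch: flagging candidate connectors and blockers in an
# information-flow network. A directed edge ("ana", "raj") means that
# ana passes information to raj. All names and edges are made up.
import networkx as nx

G = nx.DiGraph([
    ("ana", "raj"), ("raj", "mei"), ("mei", "tom"), ("tom", "ana"),
    ("raj", "sue"), ("sue", "mei"), ("kim", "raj"), ("kim", "sue"),
])

# Betweenness picks out people who sit on many shortest paths between
# others: hubs if information flows through them, bottlenecks if it stalls.
betweenness = nx.betweenness_centrality(G)

for person in sorted(G.nodes, key=betweenness.get, reverse=True):
    # Receiving a lot but passing on little hints at a blocker;
    # the reverse pattern hints at a facilitator.
    print(person, round(betweenness[person], 2),
          "in:", G.in_degree(person), "out:", G.out_degree(person))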

KA 

Interesting! What sort of scale do you do this at?

RC 

Oh, we can scale to thousands of people – organisations that have 35,000 to 40,000 people – well beyond the scale at which one can wander around and do the ethnography oneself.

KA 

How do you elicit these micro-narratives?

RC 

I’ll give you an example. For a study we did on remote working during COVID, we simply wrote: when it comes to working from home in COVID, I like…, I don’t like…, I wish…, I wonder… – plus some metadata to slice and dice by: age bands, gender etc. Essentially, we try to ask a very open set of questions, to get people into a more reflective stance. That’s where you begin to get some really interesting stuff.

KA 

Can you tell us about some of the interesting things you found from this study? The more, I guess, interesting and surprising things you’ve seen that are perhaps not so obvious from a cursory glance.

RC 

The one thing that was very clear from the COVID studies was that the organisation’s perception of work from home was the key to whether it actually worked or not. If management gives the impression that work from home is somehow not quite proper work, then you’re going to get a poor work from home experience for all. If management isn’t trusting a person to work from home, or isn’t trusting a team to work from home then you’ve got a problem with your management, not with your people. The bigger the trust gap, the worse the experience. Employees in such environments feel more overwhelmed, more isolated, and generally more limited and restricted in their lives. That was the really interesting finding that came out of this piece of work. 

KA 

That’s fascinating…but I guess should not be surprising in hindsight. Management attitudes play a large role in determining employee behaviours and attitudes, and one would expect this to be even more the case when there is less face-to-face interaction. This is also a nice segue into another area I’d like to get you to talk about:  the notion of organisational culture.  Could you tell us about your take on the concept?

RC 

How cynical do you want me to be?

KA 

Very, I expect nothing less!

RC 

Well, if you go back into why culture became such a big thing, the first person who talked about culture in organisations was Elliott Jaques, way back in the 50s. But it didn’t really catch on then. It became a thing in the early 80s. And how it did is a very interesting story.

Up until the early 70s, you had – in America at least – a sort of an American Dream being lived underpinned by the illusion of continuous growth.  Then came the challenges of the 70s, the oil crisis and numerous other challenges that resulted in a dramatic loss of confidence in the American system. At the same time, you had the Japanese miracle, where a country that had two nuclear bombs dropped on it thirty years earlier was, by the 1970s, the second biggest economy in the world. And there was this sort of frenzy of interest in what the Japanese were doing to create this economic miracle and, more important, what America could learn from it. There were legions of consultants and academics going back and forth between the two countries.

One of the groups that was trying to learn from the Japanese was McKinsey. But this wasn’t really helping build confidence in the US. On the contrary, this approach seemed to imply that the Japanese were in some way better, which didn’t go down particularly well with the local audience. There was certainly interest in the developments around continuous improvement,  The Toyota Way etc – around getting the workers involved with the innovation of products and processes, as well as the cultural notions around loyalty to the organisation etc.  However, that was not enough to excite an American audience.

The spark came from Peters and Waterman’s book, In Search of Excellence, which highlighted examples of American companies that were doing well. The book summarised eight features that these companies had in common – these were labelled principles of a good culture, and that’s where the McKinsey Seven S model came from. It was a kind of mix of ideas pulled in from Peters/Waterman and the Japanese continuous improvement and culture stuff, all knocked together really quite quickly. In a fortunate (for Peters and Waterman) coincidence, the US economy turned the corner at around the time the book was published, and sales took off. That said, it’s a very well written book. The first half of In Search of Excellence is stunning. If you read it you’ll see that the questions they asked then are relevant questions even today.

Anyway, the book came out at exactly the right time: the economy had turned the corner, McKinsey had a Seven S model to sell, and then two universities jumped into the game, Stanford and Harvard… and lo and behold, organisational culture became a management buzz-phrase, and remains so to this day. Indeed, the idea that special cultures drive performance has bubbled up again in recent years, especially in the tech sector. In the end, though, the notion of culture is very much a halo effect, in that proponents of culture attribute performance to certain characteristics (i.e. culture). The truth is that success may give rise to a culture, but there is no causal effect the other way round.

KA 

Thanks for that historical perspective. In my experience in large multinationals, I’ve found that the people who talked about culture the most were from HR. And they were mostly concerned about enforcing a certain uniformity of thought across the organisation. It was around that time that I came across the work of some of the critical management scholars you alluded to at the start of this conversation – in particular, Hugh Willmott’s wonderful critique of organisational culture, Strength is Ignorance; Slavery is Freedom. I thought that was a brilliant take on why people tend to push back on HR-driven efforts to enforce a culture mindset – the workshops and stuff that are held to promote it. I’m surprised that people in high places continue to be enamoured of this concept when they really should know better, having come up through the ranks themselves.

RC 

Yeah, the question is whether they have come up through the ranks themselves. A lot of them have come through MBA programmes or have been parachuted in. This is why, when I teach in the MBA, I try to teach this wider appreciation of culture, because I know what the positivists are teaching – they are telling their students that culture is a good lever for getting the kinds of behaviours that managers want.

KA 

Totally agree, the solution is to teach diverse perspectives instead of the standard positivist party line. I try to do the same in my MBA decision-making class – that is, I challenge the positivistic mindset by drawing students’ attention to the fact that in real life, problems are not given but have to be taken from complex situations (to paraphrase Russell Ackoff). Moreover, how one frames the problem determines the kind of answer one will get. Analytical decision-making tools assume the decision problem is given, but one is never given a problem in real life. So, I spend a lot of time teaching sensemaking approaches that can help students extract problems from complex situations by building context around the situation.

Anyway, we’ve been going for quite a bit; there’s one thing I absolutely must touch upon before we close this conversation – the use of irony in management. I know your PhD work was around this concept, and it’s kind of an unusual take. I’m sure my readers would be very interested to hear more about your take on irony and why it’s useful in management.

RC 

I think we’ve set the stage quite nicely in terms of the cultural discussion. So what I was looking at in my PhD was a massive cultural change in an Australian company, a steelworks. We had unfettered access to the company for six and a half years, which is kind of unheard of. So anyway, one of the interesting things we noticed during our fieldwork was that everybody was identifying the same group of people as being the ones that were giving them the best information, were the easiest to talk to, had the most useful data sources, etc.

We then noticed that these people seemed to have an ironic sensibility. What does that mean? Well, they poked fun at themselves, their teammates, managers and the organisation…and indeed, even our research, but in very subtle ways. However, these people were also doing their work exceptionally well: they had more knowledge about what the hell was going on than anybody else in the company. Everybody liked them, everybody wanted to work with them, everybody was coming to them as problem solvers. You know, they had all of this interesting stuff happening around them.

So, what does it mean to have an ironic stance or an ironic sensibility in the midst of a shifting culture while doing quite complex work in challenging conditions? Well, there are three elements to it: firstly, there’s a perspective that you take; secondly, there’s a performance that you give; and thirdly, there’s a personality or character you develop.

The ironic perspective is that you see the gap between the rhetoric and the reality; you see the gaps that most others do not. Then you’ve got this feeling that maybe it’s only you that sees the gap, and that can be quite scary – especially if you’re trying to convey that there’s a gap to powerful people who haven’t seen it, and may even think everything’s going well.

How do you do this without losing your head?  And I mean that both literally (as in going crazy) and metaphorically as in losing your job.

That’s where the ironic performance comes in  – you say one thing while actually meaning something else. You’re trying to get people to deconstruct your message and work out where the gap is for themselves rather than confronting them with it and saying, “look, here is the gap”. So, this is where all the witticisms and the play on words and the humour come in. These are devices through which this message is transmitted in a way that helps the ironist keep her head – both metaphorically and in terms of her own sanity. These people are critical to the organisation because they call things out in a way that is acceptable. Moreover, since such people also tend to be good at what they do, they tend to have an outsized influence on their peers as well as on management.

So, our argument was that these folks with an ironic sensibility are not just useful to have around, they’re absolutely vital, and you should do everything you can to find them and look after them in the contemporary organisation.

KA 

So, there’s a clear distinction between a cynical and an ironic personality: the cynic will call it out quite bluntly, in a way that puts people off. The ironists get away with it because they call it out in a way so subtle that it could even be construed as not calling it out. It requires a certain skill and talent to do that.

RC 

Yes, and there’s a different emotional response as well. The cynic calls it out and hates it; the ironist expects it and takes joy in its absurdity.

KA 

So, the ironist is a bit like the court jester of yore: given licence to call out bullshit in palatable, even entertaining ways.

RC 

I like that. The original ironist was Socrates – pretending to be this bumbling fool but actually ridiculously sharp. The pretence is aimed at exposing an inconsistency in the other’s thinking, and at starting a dialogue about it. That’s the role the ironist plays in achieving change.

KA 

That’s fascinating because it ties in with something I’ve noticed in my travels through various organisations. I do a lot of dialogic work with groups – trying to use conversations to frame different perspectives on complex situations. When doing so I’ve often found that the people with the most interesting things to say will have this ironic sensibility – they are able to call out bullshit using a memorable one-liner or gentle humour, in a way that doesn’t kill a conversation but actually encourages it.  There is this important dialogic element to irony.

RC 

It’s what they call the soft irony of Socrates – the witticisms and the elegance that keeps a difficult conversation going for long enough to surface different perspectives. The thing is you can keep going because in a complex situation there isn’t a single truth or just one right way of acting.

KA 

It gets to a possible way of acting. In complex situations there are multiple viable paths and the aim of dialogue is to open up different perspectives so that these different paths become apparent. I see that irony can be used to draw attention to these in a memorable way.  These ironists are revolutionaries of sorts, they have a gift of the gab, they’re charismatic, they are fun to talk to. People open up to them and engage with them, in contrast to cynics whose bitterness tends to shut down dialogue completely.

RC 

Yeah, and the conversation can continue even when the ironists depart. As an extreme example, Socrates chose to die in the final, ironic act of his life. Sure, he was old and his time was coming anyway, but the way he chose to go highlighted the gap between principles and practice in Athens in an emphatic way. So emphatic that we talk about it now, millennia later.

The roll call is long: Socrates drank hemlock, Cicero was murdered, Voltaire was exiled, Oscar Wilde went to jail, Jonathan Swift was sent to a parish in the middle of Ireland – and so on. All were silenced so that they wouldn’t cause any more trouble. So however witty, however elegant your rhetoric, and however hard you try to keep these conversations going and get people to see the gap, there’s always a risk that a sword will be plunged into your abdomen.

KA 

The system will get you in the end, but the conversation will continue! I think that’s a great note on which to conclude our chat.  Thanks very much for your time, Richard.  I really enjoyed the conversation and learnt a few things, as I always do when chatting with you.

RC 

It’s been a pleasure, always wonderful to talk to you.

Written by K

March 29, 2021 at 7:35 pm

Learning, evolution and the future of work


The Janus-headed rise of AI has prompted many discussions about the future of work.  Most, if not all, are about AI-driven automation and its consequences for various professions. We are warned to prepare for this change by developing skills that cannot be easily “learnt” by machines.  This sounds reasonable at first, but less so on reflection: if skills that were thought to be uniquely human less than a decade ago can now be done, at least partially, by machines, there is no guarantee that any specific skill one chooses to develop will remain automation-proof in the medium-term future.

This raises the question of what we can do, as individuals, to prepare for a machine-centric workplace. In this post I offer a perspective on this question based on Gregory Bateson’s writings as well as my work and teaching experiences.

Levels of learning

Given that humans are notoriously poor at predicting the future, it should be clear that hitching one’s professional wagon to a specific set of skills is not a good strategy. Learning a set of skills may pay off in the short term, but it is unlikely to work in the long run.

So what can one do to prepare for an ambiguous and essentially unpredictable future?

To answer this question, we need to delve into an important, yet oft-overlooked aspect of learning.

A key characteristic of learning is that it is driven by trial and error.  To be sure, intelligence may help winnow out poor choices at some stages of the process, but one cannot eliminate error entirely. Indeed, it is not desirable to do so because error is essential for that “aha” instant that precedes insight.  Learning therefore has a stochastic element: the specific sequence of trial and error followed by an individual is unpredictable and likely to be unique. This is why everyone learns differently: the mental model I build of a concept is likely to be different from yours.

In a paper entitled The Logical Categories of Learning and Communication, Bateson pointed out that the stochastic nature of learning has an interesting consequence:

If we accept the overall notion that all learning is in some degree stochastic (i.e., contains components of “trial and error”), it follows that an ordering of the processes of learning can be built upon a hierarchic classification of the types of error which are to be corrected in the various learning processes.

Let’s unpack this claim by looking at his proposed classification:

Zero order learning –    Zero order learning refers to situations in which a given stimulus (or question) results in the same response (or answer) every time. Any instinctive behaviour – such as a reflex response on touching a hot kettle – is an example of zero order learning.  Such learning is hard-wired in the learner, who responds with the “correct” option to a fixed stimulus every single time. Since the response does not change with time, the process is not subject to trial and error.

First order learning (Learning I) –  Learning I is where an individual learns to select a correct option from a set of similar elements. It involves a specific kind of trial and error that is best explained through a couple of examples. The  canonical example of Learning I is memorization: Johnny recognises the letter “A” because he has learnt to distinguish it from the 25 other similar possibilities. Another example is Pavlovian conditioning wherein the subject’s response is altered by training: a dog that initially salivates only when it smells food is trained, by repetition, to salivate when it hears the bell.

A key characteristic of Learning I is that the individual learns to select the correct response from a set of comparable possibilities – comparable because the possibilities are of the same type (e.g., pick a letter from the alphabet). Consequently, first order learning cannot lead to a qualitative change in the learner’s response. Much of traditional school and university teaching is geared toward first order learning: students are taught to develop the “correct” understanding of concepts and techniques via a repetition-based process of trial and error.

As an aside, note that much of what goes under the banner of machine learning and AI can also be classed as first order learning.
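A small example may make the parallel concrete. The classifier sketched below learns, through repeated exposure to labelled examples, to select the correct answer from a fixed set of ten digits; it can get better at selecting, but it can never step outside that set or reframe the problem. (The dataset and model are illustrative choices, not the only ones possible.)

# Sketch: supervised classification as first order learning. The model
# learns to pick the correct response from a fixed set of possibilities
# (the digits 0-9); the set of possible answers never changes.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small images of handwritten digits, labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

# Training is a repetition-based process of error correction.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The model can only ever answer with one of the ten labels it was
# trained on; reframing the problem (second order learning) is beyond it.
print("accuracy:", round(model.score(X_test, y_test), 3))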

Second order learning (Learning II) –  Second order learning involves a qualitative change in the learner’s response to a given situation. Typically, this occurs when a learner sees a familiar problem situation in a completely new light, thus opening up new possibilities for solutions.  Learning II therefore necessitates a higher order of trial and error, one that is beyond the ken of machines, at least at this point in time.

Complex organisational problems, such as determining a business strategy, require a second order approach because they cannot be precisely defined and therefore lack an objectively correct solution. Echoing Horst Rittel, solutions to such problems are not true or false, but better or worse.

Much of the teaching that goes on in schools and universities hinders second order learning because it implicitly conditions learners to frame problems in ways that make them amenable to familiar techniques. However, as Russell Ackoff noted, “outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”   Two  aspects of this perceptive statement bear further consideration. Firstly, to extract a problem from a situation one has to appreciate or make sense of  the situation.  Secondly,  once the problem is framed, one may find that solving it requires skills that one does not possess. I expand on the implications of these points in the following two sections.

Sensemaking and second order learning

In an earlier piece, I described sensemaking as the art of collaborative problem formulation. There is a huge variety of sensemaking approaches; the Gamestorming site describes many of them in detail. Most of these are aimed at exploring a problem space by harnessing the collective knowledge of a group of people who have diverse, even conflicting, perspectives on the issue at hand. The greater the diversity, the more complete the exploration of the problem space.

Sensemaking techniques help in elucidating the context in which a problem lives. This refers to the problem’s environment, and in particular the constraints that the environment imposes on potential solutions to the problem. As Bateson puts it, context is “a collective term for all those events which tell an organism among what set of alternatives [it] must make [its] next choice.” But this raises the question of how these alternatives are to be determined. The question cannot be answered directly because it depends on the specifics of the environment in which the problem lives. Surfacing these specifics by asking the right questions is the task of sensemaking.

As a simple example, if I request you to help me formulate a business strategy, you are likely to begin by asking me a number of questions such as:

  • What kind of business are you in?
  • Who are your customers?
  • What’s the competitive landscape?
  • …and so on

Answers to these questions fill out the context in which the business operates, thus making it possible to formulate a meaningful strategy.

It is important to note that context rarely remains static; it evolves in time. Indeed, many companies have faded away because they failed to appreciate changes in their business context: Kodak is a well-known example, and there are many more. So organisations must evolve too. However, it is a mistake to think of an organisation and its environment as evolving independently; the two always evolve together. Such co-evolution is as true of natural systems as it is of social ones. As Bateson tells us:

…the evolution of the horse from Eohippus was not a one-sided adjustment to life on grassy plains. Surely the grassy plains themselves evolved [on the same footing] with the evolution of the teeth and hooves of the horses and other ungulates. Turf was the evolving response of the vegetation to the evolution of the horse. It is the context which evolves.

Indeed, one can think of evolution by natural selection as a process by which nature learns (in a second-order sense).

The foregoing discussion points to another problem with traditional approaches to education: we are implicitly taught that problems, once solved, stay solved. It is seldom so in real life because, as we have noted, the environment evolves even if the organisation remains static. In the worst case scenario (which happens often enough) the organisation will die if it does not adapt appropriately to changes in its environment. If this is true, then second-order learning is important not just for individuals but for organisations as a whole. This harks back to the notion of the learning organisation, developed and evangelized by Peter Senge in the early 90s. A learning organisation is one that continually adapts itself to a changing environment. As one might imagine, it is an ideal that is difficult to achieve in practice. Indeed, attempts to create learning organisations have often ended up with paradoxical outcomes. In view of this, it seems more practical for organisations to focus on developing what one might call learning individuals – people who are capable of adapting to changes in their environment through continual learning.

Learning to learn

Cliches aside, the modern workplace is characterised by rapid, technology-driven change. It is difficult for an  individual to keep up because one has to:

    • Figure out which changes are significant and therefore worth responding to.
    • Be capable of responding to them meaningfully.

The media hype about the sexiest job of the 21st century and the like further fuels the fear of obsolescence. One feels an overwhelming pressure to do something. The old adage about combating fear with action holds true here: one has to do something. But what meaningful action can one take?

The fact that this question arises points to the failure of traditional university education. With its undue focus on teaching specific techniques, the more important second-order skill of learning to learn has fallen by the wayside. In reality, though, it is now easier than ever to learn new skills on one’s own. When I was hired as a database architect in 2004, there were few quality resources available for free. Ten years later, I was able to start teaching myself machine learning using top-notch software, backed by countless quality tutorials in blog and video formats. However, I wasted a lot of time in getting started because it took me a while to get over my reluctance to explore without a guide. Cultivating the habit of learning on my own earlier would have made it a lot easier.

Back to the future of work

When industry complains about new graduates being ill-prepared for the workplace, educational institutions respond by updating curricula with more (New!! Advanced!!!) techniques. However, the complaints continue, and Bateson’s notion of second order learning tells us why:

  • Firstly, problem formulation is distinct from problem solving; the distinction is akin to that between human and machine intelligence.
  • Secondly, one does not know what skills one may need in the future, so instead of learning specific skills one has to learn how to learn.

In my experience, it is possible to teach these higher order skills to students in a classroom environment. However, it has to be done in a way that starts from where students are, in terms of skills and dispositions, and moves them gradually to less familiar situations. The approach is based on David Cavallo’s work on emergent design, which I have often used in my consulting work. Two examples may help illustrate how this works in the classroom:

  • Many analytically-inclined people think sensemaking is a waste of time because they see it as “just talk”. So, when teaching sensemaking, I begin with quantitative techniques for dealing with uncertainty, such as Monte Carlo simulation (see the sketch after this list), and then gradually introduce examples of uncertainties that are hard, if not impossible, to quantify. This progression naturally leads to problem situations in which they see the value of sensemaking.
  • When teaching data science, it is difficult to comprehensively cover basic machine learning algorithms in a single semester. However, students are often reluctant to explore on their own because they tend to be daunted by the mathematical terminology and notation. To encourage exploration (i.e. learning to learn) we use a two-step approach: a) classes focus on intuitive explanations of algorithms and the commonalities between concepts used in different algorithms – the classes are not lectures but interactive sessions involving lots of exercises and Q&A; b) the assignments go beyond what is covered in the classroom (but still well within reach of most students), which forces them to learn on their own. The approach works: just the other day, my wonderful co-teachers, Alex and Chris, commented on the amazing learning journey of some of the students – so tentative and hesitant at first, but well on their way to becoming confident data professionals.
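To give a flavour of the quantitative starting point mentioned in the first bullet above, here is a minimal Monte Carlo sketch: it simulates the total duration of three sequential project tasks, each modelled by a triangular distribution. The three-point estimates are invented for illustration.

# Minimal Monte Carlo sketch: distribution of the total duration of three
# sequential project tasks. The (min, most likely, max) estimates, in days,
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

tasks = [(2, 4, 10), (3, 5, 12), (1, 2, 6)]  # (min, mode, max) per task

# Sample each task's duration n_trials times and add the tasks up.
total = sum(rng.triangular(lo, mode, hi, n_trials) for lo, mode, hi in tasks)

print("mean:", round(total.mean(), 1), "days")
print("90th percentile:", round(np.percentile(total, 90), 1), "days")

The output is a spread of completion times rather than a single number. The natural follow-up questions – where do the three-point estimates come from, and what about risks that cannot be parameterised at all? – are precisely the point at which sensemaking enters.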

In the end, though, whether or not an individual learner learns depends on the individual. As Bateson once noted:

Perhaps the best documented generalization in the field of psychology is that, at any given moment, the behavioral characteristics of a mammal, and especially of [a human], depend upon the previous experience and behavior of that individual.

The choices we make when faced with change depend on our individual natures and experiences. Educators can’t do much about the former but they can facilitate more meaningful instances of the latter, even within the confines of the classroom.

Written by K

July 5, 2018 at 6:05 pm
