Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Management’ Category

Making sense of management – a conversation with Richard Claydon

KA 

Hi there. I’m restarting a series of conversations that I’d kicked off in 2014 but discontinued a year later for a variety of reasons. At that time, I’d interviewed a few interesting people who have a somewhat heretical view on things managers tend to take for granted. I thought there’s no better way to restart the series than to speak with Dr. Richard Claydon, who I have known for a few years.  Richard calls himself a management ironist and organisational misbehaviorist. Instead of going on and risking misrepresenting what he does, let me get him to jump in and tell you himself.

Welcome Richard, tell us a bit about what you do.

RC  

I position myself as having a pragmatic, realistic take on management. Most business schools have a very positivistic take on the subject, a “do A and get B” approach. On the other hand, you have a minority of academics – the critical theorists – who say, well actually if you do A, you might get B, but you also get C, D, E, F, G.  This is actually a more realistic take. However, critical management theory is full of jargon and deep theory so it’s very complex to understand. I try to position myself in the middle, between the two perspectives, because real life is actually messier than either side would like to admit.

I like to call myself a misbehaviourist because the mess in the middle is largely about misbehaviours – real but more often, perceived. Indeed, good behaviours are often misperceived as bad and bad behaviours misperceived as good. I should emphasise that my work is not about getting rid of the bad apples or performance managing people. Rather it’s about working out what people are doing and more importantly, why. And from that, probing the system and seeing if one can start effecting changes in behaviours and outcomes.

KA 

Interesting! What kind of reception do you get? In particular, is there an appetite for this kind of work – open-ended, with no guarantee of results?

RC 

Six of one, half a dozen of the other. I’ve noticed a greater appetite for what I do now than there was six or seven years ago. It might be that I’ve made what I do more digestible and more intelligible to people in the management space. Or it might be that people are actually recognising that what they’re currently doing isn’t working in the complex world we live in today. It’s probably a bit of both.

That said, I definitely think the shift in thinking has been accelerated by the pandemic. It’s sort of, we can’t carry on doing this anymore because it is not really helping us move forward. So, I am finding a larger proportion of people willing to explore new approaches.

KA 

Tell us a bit about the approaches you use.

RC 

As an example, I’ve used narrative analytics – collecting micro-narratives at massive scale across an organisation and then analysing them, akin to the stuff Dave Snowden does. Basically, we collect stories across the organisation, cluster them using machine learning techniques, and then get a team of people with different perspectives to look at the clusters. This gives us multiple readings on meaning. So, the team could consist of someone with leadership expertise, someone with expertise in mental health and wellbeing, someone with a behavioural background etc.
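To make the clustering step concrete, here is a minimal sketch, assuming scikit-learn is available; the micro-narratives are invented examples, and TF-IDF plus k-means is just one simple way to group short texts, not necessarily the method Richard’s team uses.

```python
# A minimal sketch of clustering micro-narratives so that a mixed team of
# interpreters can then read each cluster. The narratives below are invented
# examples; TF-IDF + k-means is one simple choice among many possible methods.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

narratives = [
    "I like being able to focus without constant interruptions",
    "I wish my manager trusted us to get the work done",
    "I don't like feeling isolated from my teammates",
    "I wonder whether anyone notices the extra hours I put in",
    # ...in practice, thousands more collected across the organisation
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(narratives)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Each cluster is then handed to people with different expertise
# (leadership, wellbeing, behavioural science) for multiple readings.
for label, text in zip(clusters, narratives):
    print(label, text)
```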

We also use social network analysis to find how information flows within an organisation. The aim here is to identify four very different types of characters: a) blockers – those who stop information from flowing, b) facilitators of information flow, c) connectors – information hubs, the go-to people in the organisation, and d) mavericks – those who are thinking differently. And if you do that, you can start identifying where interesting things are happening, where different thinking is manifesting itself, and who’s carrying that thinking across the organisation.
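As an illustration of the kind of analysis involved, here is a minimal sketch using networkx on a made-up communication network; the edge list, names and centrality measures are illustrative assumptions, not a description of the exact method used.

```python
# A minimal sketch of using social network analysis to surface the roles
# described above. The edge list is invented; real analyses would be built
# from email, chat or survey data, and the interpretation rules are rough.
import networkx as nx

edges = [
    ("Asha", "Ben"), ("Asha", "Chen"), ("Asha", "Dara"),
    ("Ben", "Chen"), ("Dara", "Eli"), ("Eli", "Fay"), ("Fay", "Dara"),
]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)             # connectors: many direct ties
betweenness = nx.betweenness_centrality(G)   # brokers/bottlenecks on flow paths

for person in G.nodes:
    print(f"{person}: degree={degree[person]:.2f}, betweenness={betweenness[person]:.2f}")

# High betweenness with low degree can flag a potential blocker or bottleneck;
# high degree plus high betweenness suggests a go-to connector.
```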

KA 

Interesting! What sort of scale do you do this at?

RC 

Oh, we can scale to thousands of people – organisations that have 35,000 to 40,000 people – well beyond the scale at which one can wander around and do the ethnography oneself.

KA 

How do you elicit these micro-narratives?

RC 

I’ll give you an example. For a study we did on remote working during COVID, we simply wrote: “When it comes to working from home in COVID, I like…”, “I don’t like…”, “I wish…”, “I wonder…”, plus some metadata to slice and dice by – age bands, gender etc. Essentially, we try to ask a very open set of questions, to get people into a more reflective stance. That’s where you begin to get some really interesting stuff.
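For readers wondering what “slice and dice” looks like in practice, here is a minimal sketch using pandas; the records and field names are invented for illustration only.

```python
# A minimal sketch of slicing collected micro-narratives by their metadata.
# The records are invented; real collections run to thousands of responses.
import pandas as pd

responses = pd.DataFrame([
    {"prompt": "I like",   "text": "the quiet time to think",     "age_band": "25-34", "gender": "F"},
    {"prompt": "I wish",   "text": "my manager trusted the team", "age_band": "35-44", "gender": "M"},
    {"prompt": "I wonder", "text": "if we'll ever go back",       "age_band": "25-34", "gender": "F"},
])

# Slice by any metadata field to compare how different groups respond
print(responses.groupby(["age_band", "prompt"]).size())
```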

KA 

Can you tell us about some of the interesting things you found from this study? The more surprising things, I guess – the ones that are perhaps not so obvious at a cursory glance.

RC 

The one thing that was very clear from the COVID studies was that the organisation’s perception of work from home was the key to whether it actually worked or not. If management gives the impression that work from home is somehow not quite proper work, then you’re going to get a poor work from home experience for all. If management isn’t trusting a person to work from home, or isn’t trusting a team to work from home, then you’ve got a problem with your management, not with your people. The bigger the trust gap, the worse the experience. Employees in such environments feel more overwhelmed, more isolated, and generally more limited and restricted in their lives. That was the really interesting finding that came out of this piece of work.

KA 

That’s fascinating…but I guess should not be surprising in hindsight. Management attitudes play a large role in determining employee behaviours and attitudes, and one would expect this to be even more the case when there is less face-to-face interaction. This is also a nice segue into another area I’d like to get you to talk about:  the notion of organisational culture.  Could you tell us about your take on the concept?

RC 

How cynical do you want me to be?

KA 

Very, I expect nothing less!

RC 

Well, if you go back into why culture became such a big thing, the first person who talked about culture in organisations was Elliott Jaques, way back in the 50s. But it didn’t really catch on then. It became a thing in the early 80s. And how it did is a very interesting story.

Up until the early 70s, you had – in America at least – a sort of American Dream being lived, underpinned by the illusion of continuous growth. Then came the 70s – the oil crisis and numerous other shocks that resulted in a dramatic loss of confidence in the American system. At the same time, you had the Japanese miracle, where a country that had two nuclear bombs dropped on it thirty years earlier was, by the 1970s, the second biggest economy in the world. And there was this sort of frenzy of interest in what the Japanese were doing to create this economic miracle and, more importantly, what America could learn from it. There were legions of consultants and academics going back and forth between the two countries.

One of the groups that was trying to learn from the Japanese was McKinsey. But this wasn’t really helping build confidence in the US. On the contrary, this approach seemed to imply that the Japanese were in some way better, which didn’t go down particularly well with the local audience. There was certainly interest in the developments around continuous improvement,  The Toyota Way etc – around getting the workers involved with the innovation of products and processes, as well as the cultural notions around loyalty to the organisation etc.  However, that was not enough to excite an American audience.

The spark came from Peters and Waterman’s book, In Search of Excellence, which highlighted examples of American companies that were doing well. The book summarised eight features that these companies had in common – these were labelled principles of a good culture and that’s where the McKinsey Seven S model came from. It was a kind of mix of ideas pulled in from Peters and Waterman, the Japanese continuous improvement and culture stuff, all knocked together really quite quickly. In a fortunate (for Peters and Waterman) coincidence, the US economy turned the corner at around the time the book was published and sales took off.

That said, it’s a very well written book. The first half of In Search of Excellence is stunning. If you read it you’ll see that the questions they asked then are relevant questions even today. Anyway, the book came out at exactly the right time: the economy had turned the corner, McKinsey had a Seven S model to sell and then two universities jumped into the game, Stanford and Harvard… and lo and behold, organisational culture became a management buzz-phrase, and remains so to this day. Indeed, the idea that special cultures drive performance has bubbled up again in recent years, especially in the tech sector. In the end, though, the notion of culture is very much a halo effect, in that the proponents of culture tend to attribute performance to certain characteristics (i.e. culture). The truth is that success may give rise to a culture, but there is no causal effect the other way round.

KA 

Thanks for that historical perspective. In my experience in large multinationals, I’ve found that the people who talked about culture the most were from HR. And they were mostly concerned with enforcing a certain uniformity of thought across the organisation. That was around the time I came across the work of some of the critical management scholars you alluded to at the start of this conversation. In particular, Hugh Willmott’s wonderful critique of organisational culture: Strength is Ignorance; Slavery is Freedom. I thought that was a brilliant take on why people tend to push back on HR-driven efforts to enforce a culture mindset – the workshops and stuff that are held to promote it. I’m surprised that people in high places continue to be enamoured of this concept when they really should know better, having come up through the ranks themselves.

RC 

Yea, the question is whether they have come through the ranks themselves. A lot of them have come through MBA programmes or have been parachuted in. This is why, when I teach in the MBA, I try to teach this wider appreciation of culture because I know what the positivists are teaching – they are telling their students that culture is a good lever to get the kind of desirable behaviours that managers want.

KA 

Totally agree, the solution is to teach diverse perspectives instead of the standard positivist party line. I try to do the same in my MBA decision-making class – that is, I challenge the positivistic mindset by drawing students’ attention to the fact that in real life, problems are not given but have to be taken from complex situations (to paraphrase Russell Ackoff). Moreover, how one frames the problem determines the kind of answer one will get. Analytical decision-making tools assume the decision problem is given, but one is never given a problem in real life. So, I spend a lot of time teaching sensemaking approaches that can help students extract problems from complex situations by building context around the situation.

Anyway, we’ve been going for quite a bit, and there’s one thing I absolutely must touch upon before we close this conversation – the use of irony in management. I know your PhD work was around this concept, and it’s kind of an unusual angle. I’m sure my readers would be very interested to hear more about your take on irony and why it’s useful in management.

RC 

I think we’ve set the stage quite nicely in terms of the cultural discussion. So what I was looking at in my PhD was a massive cultural change in an Australian company, a steelworks. We had unfettered access to the company for six and a half years, which is kind of unheard of. So anyway, one of the interesting things we noticed during our fieldwork was that everybody was identifying the same group of people as being the ones that were giving them the best information, were the easiest to talk to, had the most useful data sources, etc.

We then noticed that these people seemed to have an ironic sensibility. What does that mean? Well, they poked fun at themselves, their teammates, managers and the organisation…and indeed, even our research, but in very subtle ways. However, these people were also doing their work exceptionally well: they had more knowledge about what the hell was going on than anybody else in the company. Everybody liked them, everybody wanted to work with them, everybody was coming to them as problem solvers. You know, they had all of this interesting stuff happening around them.

So, what does it mean to have an ironic stance or an ironic sensibility in the midst of a shifting culture while doing quite complex work in challenging conditions? Well, there are three elements to it: firstly, there’s a perspective that you take; secondly, there’s a performance that you give; and thirdly, there’s a personality or character you develop.

The ironic perspective is that you see the gap between the rhetoric and the reality – you see the gaps that most others do not. Then you’ve got this feeling that maybe it’s only you that sees the gap, and that can be quite scary. Especially if you’re trying to convey that there’s a gap to powerful people who haven’t seen it, and may even think everything’s going well.

How do you do this without losing your head?  And I mean that both literally (as in going crazy) and metaphorically as in losing your job.

That’s where the ironic performance comes in  – you say one thing while actually meaning something else. You’re trying to get people to deconstruct your message and work out where the gap is for themselves rather than confronting them with it and saying, “look, here is the gap”. So, this is where all the witticisms and the play on words and the humour come in. These are devices through which this message is transmitted in a way that helps the ironist keep her head – both metaphorically and in terms of her own sanity. These people are critical to the organisation because they call things out in a way that is acceptable. Moreover, since such people also tend to be good at what they do, they tend to have an outsized influence on their peers as well as on management.

So, our argument was that these folks with an ironic sensibility are not just useful to have around, they’re absolutely vital, and you should do everything you can to find them and look after them in the contemporary organisation.

KA 

So, there’s a clear distinction between a cynical and an ironic personality, because the cynic will call it out quite bluntly, in a way that puts people off. The ironists get away with it because they call it out in a very subtle way that could even be construed as not calling it out. It requires a certain skill and talent to do that.

RC 

Yes, and there’s a different emotional response as well. The cynic calls it out and hates it; the ironist expects it and takes joy in its absurdity.

KA 

So, the ironist is a bit like the court jester of yore: given licence to call out bullshit in palatable, even entertaining ways.

RC 

I like that. The original ironist was Socrates – pretending to be this bumbling fool but actually ridiculously sharp. The pretence is aimed at exposing an inconsistency in the other’s thinking and starting a dialogue about it. That’s the role the ironist plays in achieving change.

KA 

That’s fascinating because it ties in with something I’ve noticed in my travels through various organisations. I do a lot of dialogic work with groups – trying to use conversations to frame different perspectives on complex situations. When doing so I’ve often found that the people with the most interesting things to say will have this ironic sensibility – they are able to call out bullshit using a memorable one-liner or gentle humour, in a way that doesn’t kill a conversation but actually encourages it.  There is this important dialogic element to irony.

RC 

It’s what they call the soft irony of Socrates – the witticisms and the elegance that keeps a difficult conversation going for long enough to surface different perspectives. The thing is you can keep going because in a complex situation there isn’t a single truth or just one right way of acting.

KA 

It gets to a possible way of acting. In complex situations there are multiple viable paths and the aim of dialogue is to open up different perspectives so that these different paths become apparent. I see that irony can be used to draw attention to these in a memorable way.  These ironists are revolutionaries of sorts, they have a gift of the gab, they’re charismatic, they are fun to talk to. People open up to them and engage with them, in contrast to cynics whose bitterness tends to shut down dialogue completely.

RC 

Yeah, and the conversation can continue even when the ironists depart. As an extreme example, Socrates chose to die in the final, ironic act of his life. Sure, he was old and his time was coming anyway, but the way he chose to go highlighted the gap between principles and practice in Athens in an emphatic way. So emphatic that we talk about it now, millennia later.

The roll call is long: Socrates drank hemlock, Cicero was murdered, Voltaire was exiled, Oscar Wilde went to jail, Jonathan Swift was sent to a parish in the middle of Ireland – and so on. All were silenced so that they wouldn’t cause any more trouble. So however witty, however elegant your rhetoric, and however hard you try to keep these conversations going and get people to see the gap, there’s always a risk that a sword will be plunged into your abdomen.

KA 

The system will get you in the end, but the conversation will continue! I think that’s a great note on which to conclude our chat.  Thanks very much for your time, Richard.  I really enjoyed the conversation and learnt a few things, as I always do when chatting with you.

RC 

It’s been a pleasure, always wonderful to talk to you.

Written by K

March 29, 2021 at 7:35 pm

Learning, evolution and the future of work

The Janus-headed rise of AI has prompted many discussions about the future of work.  Most, if not all, are about AI-driven automation and its consequences for various professions. We are warned to prepare for this change by developing skills that cannot be easily “learnt” by machines.  This sounds reasonable at first, but less so on reflection: if skills that were thought to be uniquely human less than a decade ago can now be exercised, at least partially, by machines, there is no guarantee that any specific skill one chooses to develop will remain automation-proof in the medium term.

This begs the question as to what we can do, as individuals, to prepare for a machine-centric workplace. In this post I offer a perspective on this question based on Gregory Bateson’s writings as well as  my consulting and teaching experiences.

Levels of learning

Given that humans are notoriously poor at predicting the future, it should be clear that hitching one’s professional wagon to a specific set of skills is not a good strategy. Learning a set of skills may pay off in the short term, but it is unlikely to work in the long run.

So what can one do to prepare for an ambiguous and essentially unpredictable future?

To answer this question, we need to delve into an important, yet oft-overlooked aspect of learning.

A key characteristic of learning is that it is driven by trial and error.  To be sure, intelligence may help winnow out poor choices at some stages of the process, but one cannot eliminate error entirely. Indeed, it is not desirable to do so because error is essential for that “aha” instant that precedes insight.  Learning therefore has a stochastic element: the specific sequence of trial and error followed by an individual is unpredictable and likely to be unique. This is why everyone learns differently: the mental model I build of a concept is likely to be different from yours.

In a paper entitled The Logical Categories of Learning and Communication, Bateson observed that the stochastic nature of learning has an interesting consequence. As he puts it:

If we accept the overall notion that all learning is in some degree stochastic (i.e., contains components of “trial and error”), it follows that an ordering of the processes of learning can be built upon a hierarchic classification of the types of error which are to be corrected in the various learning processes.

Let’s unpack this claim by looking at his proposed classification:

Zero order learning –    Zero order learning refers to situations in which a given stimulus (or question) results in the same response (or answer) every time. Any instinctive behaviour – such as a reflex response on touching a hot kettle – is an example of zero order learning.  Such learning is hard-wired in the learner, who responds with the “correct” option to a fixed stimulus every single time. Since the response does not change with time, the process is not subject to trial and error.

First order learning (Learning I) –  Learning I is where an individual learns to select a correct option from a set of similar elements. It involves a specific kind of trial and error that is best explained through a couple of examples. The  canonical example of Learning I is memorization: Johnny recognises the letter “A” because he has learnt to distinguish it from the 25 other similar possibilities. Another example is Pavlovian conditioning wherein the subject’s response is altered by training: a dog that initially salivates only when it smells food is trained, by repetition, to salivate when it hears the bell.

A key characteristic of Learning I is that the individual learns to select the correct response from a set of comparable possibilities – comparable because the possibilities are of the same type (e.g. pick a letter from the alphabet). Consequently, first order learning cannot lead to a qualitative change in the learner’s response. Much of traditional school and university teaching is geared toward first order learning: students are taught to develop the “correct” understanding of concepts and techniques via a repetition-based process of trial and error.

As an aside, note that much of what goes under the banner of machine learning and AI can also be classed as first order learning.
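To illustrate the aside, here is a minimal sketch, assuming scikit-learn is installed; the dataset and model are illustrative choices. A supervised classifier learns, through repeated error correction, to pick the right answer from a fixed set of options – first order learning in Bateson’s sense.

```python
# A minimal sketch of first-order learning in the machine-learning sense:
# the model learns, by repeated correction of errors, to pick the right
# label from a fixed set of comparable possibilities (the ten digits).
# The dataset and model choices are illustrative, not prescriptive.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)   # images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=5000)   # iterative error correction
model.fit(X_train, y_train)                 # "trial and error" on known answers

# The learner can now select the correct option from the fixed set {0,...,9},
# but it cannot reframe the problem itself - that would be second order learning.
print("accuracy:", model.score(X_test, y_test))
```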

Second order learning (Learning II) –  Second order learning involves a qualitative change in the learner’s response to a given situation. Typically, this occurs when a learner sees a familiar problem situation in a completely new light, thus opening up new possibilities for solutions.  Learning II therefore necessitates a higher order of trial and error, one that is beyond the ken of machines, at least at this point in time.

Complex organisational problems, such as determining a business strategy, require a second order approach because they cannot be precisely defined and therefore lack an objectively correct solution. Echoing Horst Rittel, solutions to such problems are not true or false, but better or worse.

Much of the teaching that goes on in schools and universities hinders second order learning because it implicitly conditions learners to frame problems in ways that make them amenable to familiar techniques. However, as Russell Ackoff noted, “outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”   Two  aspects of this perceptive statement bear further consideration. Firstly, to extract a problem from a situation one has to appreciate or make sense of  the situation.  Secondly,  once the problem is framed, one may find that solving it requires skills that one does not possess. I expand on the implications of these points in the following two sections.

Sensemaking and second order learning

In an earlier piece, I described sensemaking as the art of collaborative problem formulation. There is a huge variety of sensemaking approaches; the gamestorming site describes many of them in detail. Most of these are aimed at exploring a problem space by harnessing the collective knowledge of a group of people who have diverse, even conflicting, perspectives on the issue at hand. The greater the diversity, the more complete the exploration of the problem space.

Sensemaking techniques help in elucidating the context in which a problem lives. This refers to the problem’s environment, and in particular the constraints that the environment imposes on potential solutions to the problem.  As Bateson puts it, context is “a collective term for all those events which tell an organism among what set of alternatives [it] must make [its] next choice.”  But this begs the question as to how these alternatives are to be determined.  The question cannot be answered directly because it depends on the specifics of the environment in which the problem lives.  Surfacing these specifics by asking the right questions is the task of sensemaking.

As a simple example, if I request you to help me formulate a business strategy, you are likely to begin by asking me a number of questions such as:

  • What kind of business are you in?
  • Who are your customers?
  • What’s the competitive landscape?
  • …and so on

Answers to these questions fill out the context in which the business operates, thus making it possible to formulate a meaningful strategy.

It is important to note that context rarely remains static; it evolves in time. Indeed, many companies have faded away because they failed to appreciate changes in their business context: Kodak is a well-known example, and there are many more. So organisations must evolve too. However, it is a mistake to think of an organisation and its environment as evolving independently; the two always evolve together. Such co-evolution is as true of natural systems as it is of social ones. As Bateson tells us:

…the evolution of the horse from Eohippus was not a one-sided adjustment to life on grassy plains. Surely the grassy plains themselves evolved [on the same footing] with the evolution of the teeth and hooves of the horses and other ungulates. Turf was the evolving response of the vegetation to the evolution of the horse. It is the context which evolves.

Indeed, one can think of evolution by natural selection as a process by which nature learns (in a second-order sense).

The foregoing discussion points to another problem with traditional approaches to education: we are implicitly taught that problems, once solved, stay solved. It is seldom so in real life because, as we have noted, the environment evolves even if the organisation remains static. In the worst case scenario (which happens often enough) the organisation will die if it does not adapt appropriately to changes in its environment. If this is true, then it seems that second-order learning is important not just for individuals but for organisations as a whole. This harks back to the notion of the learning organisation, developed and evangelised by Peter Senge in the early 90s. A learning organisation is one that continually adapts itself to a changing environment. As one might imagine, it is an ideal that is difficult to achieve in practice. Indeed, attempts to create learning organisations have often ended up with paradoxical outcomes. In view of this, it seems more practical for organisations to focus on developing what one might call learning individuals – people who are capable of adapting to changes in their environment by continual learning.

Learning to learn

Cliches aside, the modern workplace is characterised by rapid, technology-driven change. It is difficult for an  individual to keep up because one has to:

    • Figure out which changes are significant and therefore worth responding to.
    • Be capable of responding to them meaningfully.

The media hype about “the sexiest job of the 21st century” and the like further fuels the fear of obsolescence. One feels an overwhelming pressure to do something. The old adage about combating fear with action holds true: one has to do something, but what meaningful action can one take?

The fact that this question arises points to the failure of traditional university education. With its undue focus on teaching specific techniques, the more important second-order skill of learning to learn has fallen by the wayside. In reality, though, it is now easier than ever to learn new skills on one’s own. When I was hired as a database architect in 2004, there were few quality resources available for free. Ten years later, I was able to start teaching myself machine learning using top-notch software, backed by countless quality tutorials in blog and video formats. However, I wasted a lot of time in getting started because it took me a while to get over my reluctance to explore without a guide. Cultivating the habit of learning on my own earlier would have made it a lot easier.

Back to the future of work

When industry complains about new graduates being ill-prepared for the workplace, educational institutions respond by updating curricula with more (New!! Advanced!!!) techniques. However, the complaints continue and  Bateson’s notion of second order learning tells us why:

  • Firstly, problem solving is distinct from problem formulation; the distinction is akin to that between human and machine intelligence.
  • Secondly, one does not know what skills one may need in the future, so instead of learning specific skills one has to learn how to learn.

In my experience, it is possible to teach these higher order skills to students in a classroom environment. However, it has to be done in a way that starts from where students are in terms of skills and dispositions and moves them gradually to less familiar situations. The approach is based on David Cavallo’s work on emergent design, which I have often used in my consulting work. Two examples may help illustrate how this works in the classroom:

  • Many analytically-inclined people think sensemaking is a waste of time because they see it as “just talk”. So, when teaching sensemaking, I begin with quantitative techniques for dealing with uncertainty, such as Monte Carlo simulation (see the sketch after this list), and then gradually introduce examples of uncertainties that are hard, if not impossible, to quantify. This progression naturally leads on to problem situations in which they see the value of sensemaking.
  • When teaching data science, it is difficult to comprehensively cover basic machine learning algorithms in a single semester. However, students are often reluctant to explore on their own because they tend to be daunted by the mathematical terminology and notation. To encourage exploration (i.e. learning to learn) we use a two-step approach: a) classes focus on intuitive explanations of algorithms and the commonalities between concepts used in different algorithms – the classes are not lectures but interactive sessions involving lots of exercises and Q&A; b) the assignments go beyond what is covered in the classroom (but still well within reach of most students), which forces students to learn on their own. The approach works: just the other day, my wonderful co-teacher, Alex, commented on the amazing learning journey of some of the students – so tentative and hesitant at first, but well on their way to becoming confident data professionals.
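As a concrete illustration of the first point, here is a minimal sketch of the kind of Monte Carlo exercise referred to above; the task estimates and the choice of triangular distributions are made-up assumptions for the example, not material from the class.

```python
# A minimal Monte Carlo sketch: estimating the chance of finishing a
# three-task project within a deadline when each task duration is uncertain.
# The estimates below are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# (optimistic, most likely, pessimistic) duration estimates in days
tasks = [(2, 4, 8), (3, 5, 10), (1, 2, 6)]

# simulate total project duration for each trial
totals = sum(rng.triangular(lo, mode, hi, n_trials) for lo, mode, hi in tasks)

deadline = 15
print(f"Mean duration: {totals.mean():.1f} days")
print(f"P(finish within {deadline} days): {(totals <= deadline).mean():.2f}")
```

The exercise makes uncertainty tangible and quantifiable, which sets up the later discussion of uncertainties that resist quantification altogether.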

In the end, though, whether or not an individual learner learns depends on the individual. As Bateson once noted:

Perhaps the best documented generalization in the field of psychology is that, at any given moment, the behavioral characteristics of a mammal, and especially of [a human], depend upon the previous experience and behavior of that individual.

The choices we make when faced with change depend on our individual natures and experiences. Educators can’t do much about the former but they can facilitate more meaningful instances of the latter, even within the confines of the classroom.

Written by K

July 5, 2018 at 6:05 pm

The two tributaries of time

How time flies. Ten years ago this month, I wrote my first post on Eight to Late.  The anniversary gives me an excuse to post something a little different. When rummaging around in my drafts folder for something suitable, I came across this piece that I wrote some years ago (2013) but didn’t publish.   It’s about our strange relationship with time, which I thought makes it a perfect piece to mark the occasion.

Introduction

The metaphor of time as a river resonates well with our subjective experiences of time.  Everyday phrases that evoke this metaphor include the flow of time and time going by, or the somewhat more poetic currents of time.  As Heraclitus said, no [person] can step into the same river twice – and so it is that a particular instant in time …like right now…is ephemeral, receding into the past as we become aware of it.

On the other hand, organisations have to capture and quantify time because things have to get done within fixed periods, the financial year being a common example. Hence, key organisational activities such as projects, strategies and budgets are invariably time-bound affairs. This can be problematic because there is a mismatch between the ways in which organisations view time and individuals experience it.

Organisational time

The idea that time is an objective entity is most clearly embodied in the notion of a timeline: a graphical representation of a time period, punctuated by events. The best known of these is perhaps the ubiquitous Gantt Chart, loved (and perhaps equally, reviled) by managers the world over.

Timelines are interesting because, as Elaine Yakura states in this paper, “they seem to render time, the ultimate abstraction, visible and concrete.”   As a result, they can serve as boundary objects that make it possible to negotiate and communicate what is to be accomplished in the specified time period. They make this possible because they tell a story with a clear beginning, middle and end, a narrative of what is to come and when.

For the reasons mentioned in the previous paragraph, timelines are often used to manage time-bound organisational initiatives. Through their use in scheduling and allocation, timelines serve to objectify time in such a way that it becomes a resource that can be measured and rationed, much like other resources such as money, labour etc.

At our workplaces we are governed by many overlapping timelines – workdays, budgeting cycles and project schedules being examples. From an individual perspective, each of these timelines is a different representation of how one’s time is to be utilised, when an activity should be started and when it must be finished. Moreover, since we are generally committed to multiple timelines, we often find ourselves switching between them. They serve to remind us what we should be doing and when.

But there’s more: one of the key aims of developing a timeline is to enable all stakeholders to have a shared understanding of time as it pertains to the initiative. In this view, a timeline is a consensus representation of how a particular aspect of the future will unfold.  Timelines thus serve as coordinating mechanisms.

In terms of the metaphor, a timeline is akin to a map of the river of time. Along the map we can measure out and apportion it; we can even agree about way-stops at various points in time. However, we should always be aware that it remains a representation of time, for although we might treat a timeline as real, the fact is no one actually experiences time as it is depicted in a timeline. Mistaking one for the other is akin to confusing the map with the territory.

This may sound a little strange so I’ll try to clarify.  I’ll start with the observation that we experience time through events and processes – for example the successive chimes of a clock, the movement of the second hand of a watch (or the oscillations of a crystal), the passing of seasons or even the greying of one’s hair. Moreover, since these events and processes can be objectively agreed on by different observers, they can also be marked out on a timeline.  Yet the actual experience of living these events is unique to each individual.

Individual perception of time

As we have seen, organisations treat time as an objective commodity that can be represented, allocated and used much like any tangible resource.  On the other hand our experience of time is intensely personal.  For example, I’m sitting in a cafe as I write these lines. My perception of the flow of time depends rather crucially on my level of engagement in writing: slow when I’m struggling for words but zipping by when I’m deeply involved. This is familiar to us all: when we are deeply engaged in an activity, we lose all sense of time but when our involvement is superficial we are acutely aware of the clock.

This is true at work as well. When I’m engaged in any kind of activity at work, be it a group activity such as a meeting, or even an individual one such as developing a business case, my perception of time has little to do with the actual passage of seconds, minutes and hours on a clock. Sure, there are things that I will do habitually at a particular time – going to lunch, for example – but my perception of how fast the day goes is governed not by the clock but by the degree of engagement with my work.

I can only speak for myself, but I suspect that this is the case with most people. Though our work lives are supposedly governed by “objective” timelines, the way we actually live out our workdays depends on a host of things that have more to do with our inner lives than visible outer ones.  Specifically, they depend on things such as feelings, emotions, moods and motivations.

Flow and engagement

OK, so you may be wondering where I’m going with this. Surely, my subjective perception of my workday should not matter as long as I do what I’m required to do and meet my deadlines, right?

As a matter of fact, I think the answer to the above question is a qualified “No”. The quality of the work we do depends on our level of commitment and engagement. Moreover, since a person’s perception of the passage of time depends rather sensitively on the degree of their involvement in a task, their subjective sense of time is a good indicator of their engagement in work.

In his book, Finding Flow, Mihaly Csikszentmihalyi describes such engagement as an optimal experience in which a person is completely focused on the task at hand.  Most people would have experienced flow when engaged in activities that they really enjoy. As Anthony Reading states in his book, Hope and Despair: How Perceptions of the Future Shape Human Behaviour, “…most of what troubles us resides in our concerns about the past and our apprehensions about the future.”  People in flow are entirely focused on the present and are thus (temporarily) free from troubling thoughts. As Csikszentmihalyi puts it, for such people, “the sense of time is distorted; hours seem to pass by in minutes.”

All this may seem far removed from organisational concerns, but it is easy to see that it isn’t: a Google search on the phrase “increase employee engagement” will throw up many articles along the lines of “N ways to increase employee engagement.”  The sense in which the term is used in these articles is essentially the same as the one Csikszentmihalyi talks about: deep involvement in work.

So, the advice of management gurus and business school professors notwithstanding, the issue is less about employee engagement or motivation than about creating conditions that are conducive to flow.  All that is needed for the latter is a deep understanding of how the particular organisation functions, of the task at hand and (most importantly) of the people who will be doing it.  The best managers I’ve worked with have grokked this, and were able to create the right conditions in a seemingly effortless and unobtrusive way. It is a skill that cannot be taught, but can be learnt by observing how such managers do what they do.

Time regained

Organisations tend to treat their employees’ time as though it were a commodity or resource that can be apportioned and allocated for various tasks. This view of time is epitomised by the timeline as depicted in a Gantt Chart or a resource-loaded project schedule.

In contrast, at an individual level, the perception of time depends rather critically on the level of engagement that a person feels with the task he or she is performing. Ideally organisations would (or ought to!) want their employees to be in that optimal zone of engagement that Csikszentmihalyi calls flow, at least when they are involved in creative work. However, like spontaneity, flow is a state that cannot be achieved by corporate decree; the best an organisation can do is to create the conditions that encourage it.

The organisational focus on timelines ought to be balanced by actions that are aimed at creating the conditions that are conducive to employee engagement and flow.  It may then be possible for those who work in organisation-land to experience, if only fleetingly, that Blakean state in which eternity is held in an hour.

Written by K

September 20, 2017 at 9:17 pm
