Archive for January 2015
Much of the work that goes on in organisations is done by groups of people who work together in order to achieve shared objectives. Given this, it is no surprise that researchers have expended a great deal of effort in building theories about how teams work. However, as Richard Hackman noted in this paper, more than 70 years of research (of ever-increasing sophistication) has not resulted in a true understanding of the factors that give rise to high-performing teams. The main reason for this failure is that:
“…groups are social systems. They redefine objective reality, they create new realities (both for their members and in their system contexts), and they evolve their own purposes and strategies for pursuing those purposes. Groups are not mere assemblies of multiple cause–effect relationships; instead, they exhibit emergent and dynamic properties that are not well captured by standard causal models.”
Hackman had a particular interest in leadership as a causal factor in team performance. One of the things he established is that leadership matters a whole lot less than is believed…or, more correctly, it matters for reasons that are not immediately obvious. As he noted:
“…60 per cent of the difference in how well a group eventually does is determined by the quality of the condition-setting pre-work the leader does. 30 per cent is determined by how the initial launch of the group goes. And only 10 per cent is determined by what the leader does after the group is already underway with its work. This view stands in stark contrast to popular images of group leadership—the conductor waving a baton throughout a musical performance or an athletic coach shouting instructions from the sidelines during a game.”
Although the numbers quoted above can be contested, the fact is that as far as team performance is concerned, conditions matter more than the quality of leadership. In this post, I draw on Hackman’s paper as well as my work (done in collaboration with Paul Culmsee) to argue that the real work of leaders is not to lead (in the conventional sense of the word) but to create the conditions in which teams can thrive.
The fundamental attribution error
Poor performance of teams is often attributed to a failure of leadership. A common example of this is the firing of a sporting team's coach after a below-par season. On the flip side, CxOs can earn big bonuses when their companies meet or exceed their financial targets, because they are seen as being directly responsible for the result.
Attributing the blame or credit for the failure or success of a team to a specific individual is called the leadership attribution error. Hackman suggested that this error is a manifestation of a human tendency to assign greater causal priority to factors that are more visible than those that are not: leaders tend to be in the limelight more than their teams and are therefore seen as being responsible for their teams’ successes and failures.
This leader-as-hero (or villain!) perspective has fueled major research efforts aimed at pinning down those elusive leadership skills and qualities that can magically transform teams into super-performing ensembles. This has been accompanied by a burgeoning industry of executive training programmes to impart these “scientifically proven” skills to masses of managers. These programmes, often clothed in the doublespeak of organisation culture, are but subtle methods of control that serve to establish directive approaches to leadership. Such methods rarely (if ever) result in high-performing organisations or teams.
An alternate approach to understanding team performance
The failure to find direct causal relationships between leadership qualities and team performance led Hackman to propose a perspective that focuses on structural conditions instead. The basic idea in this alternate approach is to focus on the organisational and social conditions that enable the team to perform well.
This notion of conditions over causes is relevant in other related areas too. Here are a couple of examples:
- Innovation: Most attempts to foster innovation focus on exhorting people to be creative and/or instituting innovation training programmes (causal approach). Such approaches usually result in innovation of an incremental kind at best. Instead, establishing a low-pressure environment that enables people to think for themselves and follow up on their ideas without fear of failure generally meets with more success (structural approach).
- Collaboration: Organisations generally recognise the importance of collaboration. Yet, they attempt to foster it in the worst possible way: via the establishment of cross-functional teams without clear mandates or goals and/or forced team-building exercises that have the opposite effect to the one intended (causal approach). The alternate approach is to simplify reporting lines, encourage open communication across departments and generally make it easy for people from different specialisations to work together in informal groups (structural approach). A particularly vexing intra-departmental separation that I have come across recently is the artificial division of responsibilities between information systems development and delivery. Such a separation results in reduced collaboration and increased finger-pointing.
That said, let’s take a look at Hackman’s advice on how to create an environment conducive to teamwork. Hackman identified the following five conditions that tend to correlate well with improved team performance:
- The group must be a real team – i.e. it must have clear boundaries (clarity as to who is a member and who isn’t), interdependence (the performance of every individual in the team must in some way depend on others in the team) and stability (membership of the team should be stable over time).
- Compelling direction – the team must have a goal that is clear and worth pursuing. Moreover, and this is important, the team must be allowed to determine how the goal is to be achieved – the end should be prescribed, not the means.
- The structure must enable teamwork – the team should be structured in a way that allows members to work together. This involves two factors: 1) the team must be of the right size – as small and as diverse as possible (large, homogeneous teams are found to be ineffective), and 2) there must be clear norms of conduct. Note that Hackman lists these two as separate points in his paper.
- Supportive organisational context – the team must have the organisational resources it needs to carry out its work: for example, access to the information required for the work, and access to technical and subject matter experts. In addition, there should be a transparent reward system that provides recognition for good work.
- Coaching – the team must have access to a mentor or coach who understands the team and has its confidence. Apart from helping team members tide over difficult situations, a good coach should be able to help them navigate organisational politics and identify emerging threats and opportunities that may not be obvious to them.
To reiterate, these are structural rather than causal factors in that they do not enhance team performance directly. Instead, when present, they tend to encourage behaviours that enhance team performance and suppress those that don’t.
Another interesting point is that some of these factors are more important than others. For example, Ruth Wageman found that team design (the constitution and structure of the team) is about four times more important than coaching in affecting the team’s ability to manage itself and forty times as powerful in affecting team performance (see this paper for details). Although the numbers should not be taken at face value, Wageman’s claim reiterates the main theme of this article: that structural factors matter more than causal ones.
The notion of a holding environment
One of the things I noticed when I first read Hackman’s approach is that it has some similarities to the one that Paul and I advocated in our book, The Heretic’s Guide to Best Practices.
The Heretic’s Guide is largely about collaborative approaches to managing (as opposed to solving!) complex problems in organisations. Our claim is that the most intractable problems in organisations are consequences of social rather than technical issues. For example, the problem of determining the “right” strategy for an organisation cannot be settled on objective grounds because the individuals involved will have diverse opinions on what the organisation’s focus should be. The process of arriving at a consensual strategy is, therefore, more a matter of dealing with this diversity than reaching an objectively right outcome. In other words, it is largely about achieving a common view of what the strategy should be and then building a shared commitment to executing it.
The key point is that there is no set process for achieving a shared understanding of a problem. Rather, one needs to have the right environment (structure!) in which contentious issues can be discussed openly without fear. In our book we used the term holding environment to describe a safe space in which such open dialogue can take place.
The theory of communicative rationality formulated by the German philosopher Jürgen Habermas outlines the norms that operate within a holding environment. It would be too long a detour to discuss Habermas’ work in any detail – see this paper or chapter 7 of our book to find out more. What is important to note is that an ideal holding environment has the following norms:
- Inclusion
- Autonomy
- Power neutrality
- Empathy
- Transparency
The problem is that some of these norms are easier to achieve than others. Inclusion, autonomy and power neutrality can be encouraged by putting in place appropriate organisational structures and rules. Empathy and transparency, however, are typically up to the individual. Nevertheless, conditions that enable the former will also encourage (though not guarantee) the latter.
In our book we discuss how such a holding environment can be approximated in multi-organisational settings such as large projects. It would take me too far afield to get into specifics of the approach here. The point I wish to make, however, is that the notion of a holding environment is in line with Hackman’s thoughts on the importance of environmental or structural factors.
Some will argue that this article merely sets up and tears down a straw man, and that modern managers are well aware of the pitfalls of a directive approach to leading teams. Granted, much has been written about the importance of setting the right conditions (such as autonomy)…and it is possible that many managers are aware of it too. The point I would make is that this awareness, if it exists at all, has not been translated into action often enough. As a result, the gap between the rhetoric and reality of leadership remains as wide as ever – managers talk the talk of leadership, but do not walk it.
Perhaps this is because many (most?) managers are reluctant to let go of the reins of control when they know they will be held responsible if things go belly-up. The few who manage to overcome their fears know that doing so requires the ability to trust others, as well as the courage and integrity to absorb the blame when things go wrong (as they inevitably will from time to time). These all-too-rare qualities are essential for the approach described here to truly take root and flourish. In conclusion, I think it is fair to say that the biggest challenges associated with building high-performance teams are ethical rather than technical ones.
Introduction – some truths about data modelling
It has been said that data is the lifeblood of business. The aptness of this metaphor became apparent to me when I was engaged in a data mapping project some years ago. The objective of that effort was to document all the data flows within the organisation. The final map showed very clearly that the volume of data on the move was a good indicator of the activity of a function: the greater the volume, the more active the function. This is akin to the human body, wherein organs that expend more energy tend to have a richer network of blood vessels.
Although the above analogy is far from perfect, it serves to highlight the simple fact that most business activities involve the movement and/or processing of data. Indeed, the key function of information systems that support business activities is to operate on and transfer data. It therefore matters a great deal how data is represented and stored. This is the main concern of the discipline of data modelling.
The mainstream approach to data modelling assumes that real world objects and relationships can be accurately represented by models. As an example, a data model representing a sales process might consist of entities such as customers and products and their relationships, such as sales (customer X purchases product Y). It is tacitly assumed that objective, bias-free models of entities and relationships of interest can be built by asking the right questions and using appropriate information collection techniques.
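To make the mainstream picture concrete, here is a minimal, purely illustrative sketch of the sales model described above. All class and attribute names are hypothetical, not drawn from any particular system; the point is simply that entities (customers, products) and a relationship between them (a sale) are treated as direct, objective representations of the world:

```python
from dataclasses import dataclass

# Entities: things in the world assumed to have an objective existence,
# each described by a handful of "natural" attributes.
@dataclass(frozen=True)
class Customer:
    customer_id: int
    name: str

@dataclass(frozen=True)
class Product:
    product_id: int
    name: str

# Relationship: "customer X purchases product Y".
@dataclass(frozen=True)
class Sale:
    customer: Customer
    product: Product
    quantity: int

alice = Customer(1, "Alice")
widget = Product(10, "Widget")
sale = Sale(alice, widget, quantity=3)
```

The tacit assumption is that a model like this, suitably refined, corresponds to the business "as it actually is" – the very assumption questioned later in this post.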
However, things are not quite so straightforward: as professional data modellers know, real-world data models are invariably tainted by compromises between rigour and reality. This is inevitable because the process of building a data model involves at least two different sets of stakeholders whose interests are often at odds – namely, business users and data modelling professionals. The former are less interested in the purity of the model than in the business process it is intended to support; the interests of the latter are often the opposite.
This reveals a truth about data modelling that is not fully appreciated by practitioners: it is a process of negotiation rather than a search for a true representation of business reality. In other words, it is a socio-technical problem that has wicked elements. As such, data modelling ought to be based on the principles of emergent design. In this post I explore this idea, drawing on a brilliant paper by Heinz Klein and Kalle Lyytinen entitled Towards a New Understanding of Data Modelling, as well as my own thoughts on the subject.
Klein and Lyytinen begin their paper by asking four questions that are aimed at uncovering the tacit assumptions underlying the different approaches to data modelling. The questions are:
- What is being modelled? This question delves into the nature of the “universe” that a data model is intended to represent.
- How well is the result represented? This question asks if the language, notations and symbols used to represent the results are fit for purpose – i.e. whether the language and constructs used are capable of modelling the domain.
- Is the result valid? This asks whether the model is a correct representation of the domain being modelled.
- What is the social context in which the discipline operates? This question is aimed at eliciting the views of different stakeholders regarding the model: how they will use it, whether their interests are taken into account and whether they benefit or lose from it.
It should be noted that these questions are general in that they can be used to enquire into any discipline. In the next section we use these questions to uncover the tacit assumptions underlying the mainstream view of data modelling. Following that, we propose an alternate set of assumptions that address a major gap in the mainstream view.
Deconstructing the mainstream view
What is being modelled?
As Klein and Lyytinen put it, the mainstream approach to data modelling assumes that the world is given and made up of concrete objects which have natural properties and are associated with [related to] other objects. This assumption is rooted in a belief that it is possible to build an objectively true picture of the world around us. This is pretty much how truth is perceived in data modelling: data/information is true or valid if it describes something – a customer, an order or whatever – as it actually is.
In philosophy, such a belief is formalized in the correspondence theory of truth, a term that refers to a family of theories that trace their origins back to antiquity. According to Wikipedia:
Correspondence theories claim that true beliefs and true statements correspond to the actual state of affairs. This type of theory attempts to posit a relationship between thoughts or statements on one hand, and things or facts on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined solely by how it relates to a reality; that is, by whether it accurately describes that reality.
In short: the mainstream view of data modelling is based on the belief that the things being modelled have an objective existence.
How well is the result represented?
If data models are to represent reality (as it actually is), then one also needs an appropriate means to express that reality in its entirety. In other words, data models must be complete and consistent, in that they represent the entire domain and do not contain any contradictory elements. Although this level of completeness and logical rigour is impossible in practice, much research effort is expended in finding ever more complete and logically consistent notations.
Practitioners have little patience with cumbersome notations invented by theorists, so it is no surprise that the most popular modelling notation is the simplest one: the entity-relationship (ER) approach which was first proposed by Peter Chen. The ER approach assumes that the world can be represented by entities (such as customer) with attributes (such as name), and that entities can be related to each other (for example, a customer might be located at an address – here “is located at” is a relationship between the customer and address entities). Most commercial data modelling tools support this notation (and its extensions) in one form or another.
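As an illustration, the "is located at" relationship mentioned above is typically realised in a relational database as a foreign key. The sketch below (using Python's built-in sqlite3 module; all table and column names are hypothetical) shows one plausible mapping of the customer/address example:

```python
import sqlite3

# Build an in-memory database with two entities and one relationship.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE address (
    address_id INTEGER PRIMARY KEY,
    street     TEXT,
    city       TEXT
);
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT,
    -- the "is located at" relationship, expressed as a foreign key
    address_id  INTEGER REFERENCES address(address_id)
);
""")
conn.execute("INSERT INTO address VALUES (1, '1 Main St', 'Sydney')")
conn.execute("INSERT INTO customer VALUES (1, 'Acme Pty Ltd', 1)")

# Traverse the relationship: which city is the customer located in?
row = conn.execute("""
    SELECT c.name, a.city
    FROM customer c JOIN address a ON c.address_id = a.address_id
""").fetchone()
print(row)  # ('Acme Pty Ltd', 'Sydney')
```

This directness – a one-to-one mapping from entities and relationships to tables and keys – is a large part of why the ER notation remains the practitioner's notation of choice.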
To summarise: despite the fact that the most widely used modelling notation is not based on rigorous theory, practitioners generally assume that the ER notation is an appropriate vehicle to represent what is going on in the domain of interest.
Is the result valid?
As argued above, the mainstream approach to data modelling assumes that the world of interest has an objective existence and can be represented by a simple notation that depicts entities of interest and the relationships between them. This leads to the question of the validity of the models thus built. To answer this we have to understand how data models are constructed.
The process of model-building involves observation, information gathering and analysis – that is, it is akin to the approach used in scientific enquiry. A great deal of attention is paid to model verification, and this is usually done via interaction with subject matter experts, users and business analysts. To be sure, the initial model is generally incomplete, but it is assumed that it can be iteratively refined to incorporate newly surfaced facts and fix errors. The underlying belief is that such a process gets ever-closer to the truth.
In short: it is assumed that an ER model built through a systematic and iterative process of enquiry will be a valid representation of the domain of interest.
What is the social context in which the discipline operates?
From the above, one might get the impression that data modelling involves a lot of user interaction. Although this is generally true, it is important to note that the users’ role is restricted to providing information to data modellers. The modellers then interpret the information provided by users and cast it into a model.
This brings up an important socio-political implication of the mainstream approach: data models generally support business applications that are aimed at maintaining and enhancing managerial control through automation and / or improved oversight. Underlying this is the belief that a properly constructed data model (i.e. one that accurately represents reality) can enhance business efficiency and effectiveness within the domain represented by the model.
In brief: data models are built to further the interests of specific stakeholder groups within an organization.
Summarising the mainstream view
The detailed responses to the questions above reveal that the discipline of data modelling is based on the following assumptions:
- The domain of interest has an objective existence.
- The domain can be represented using a (more or less) logical language.
- The language can represent the domain of interest accurately.
- The resulting model is based largely on a philosophy of managerial control, and can be used to drive organizational efficiency and effectiveness.
Many (most?) data management professionals will see these assumptions as uncontroversial. However, as we shall see next, things are not quite so simple…
Motivating an alternate view of data modelling
In an earlier section I mentioned the correspondence theory of truth, which tells us that true statements are those that correspond to the actual state of affairs in the real world. A problem with correspondence theories is that they assume that: a) there is an objective reality, and b) it is perceived in the same way by everyone. This assumption is problematic, especially for issues that have a social dimension. Such issues are perceived differently by different stakeholders, each of whom will seek data that supports their point of view. The problem is that there is no way to determine which data is “objectively right.” More to the point, in such situations the very notion of “objective rightness” can be legitimately questioned.
Another issue with correspondence theories is that a piece of data can at best be an abstraction of a real-world object or event. This is a serious problem with correspondence theories in the context of business intelligence. For example, when a sales rep records a customer call, he or she notes down only what is required by the CRM system. Other data that may well be more important is not captured or is relegated to a “Notes” or “Comments” field that is rarely if ever searched or accessed.
Another perspective on truth is offered by the so-called consensus theory, which asserts that true statements are those that are agreed to by the relevant group of people. This is often the way “truth” is established in organisations. For example, managers may choose to calculate KPIs using certain pieces of data that are deemed to be true. The problem with this is that consensus can be achieved by means that are not necessarily democratic. For example, a KPI definition chosen by a manager may be contested by an employee; nevertheless, the employee has to accept it because organisations are not democracies. A more significant issue is that the notion of a “relevant group” is problematic because there is no clear criterion by which to define relevance. Quite naturally, this leads to conflict and ill-will.
This leads one to formulate alternative answers to the four questions posed above, thus paving the way for a new approach to data modelling.
An alternate view of data management
What is being modelled?
The discussion of the previous section suggests that data models cannot represent an objective reality because there is never any guarantee that all interested parties will agree on what that reality is. Indeed, insofar as data models are concerned, it is more useful to view reality as being socially constructed – i.e. collectively built by all those who have a stake in it.
How is reality socially constructed? Basically, through a process of communication in which individuals discuss their own viewpoints and agree on how differences (if any) can be reconciled. Klein and Lyytinen note that:
…the design of an information system is not a question of fitness for an organizational reality that can be modelled beforehand, but a question of fitness for use in the construction of a [collective] organizational reality…
This is more in line with the consensus theory of truth than the correspondence theory.
In brief: the reality that data models are required to represent is socially constructed.
How well is the result represented?
Given the above, it is clear that any data model ought to be subjected to validation by all stakeholders so that they can check that it actually does represent their viewpoint. This can be difficult to achieve because most stakeholders do not have the time or inclination to validate data models in detail.
In view of the above, it is clear that the ER notation and others of its ilk can represent a truth rather than the truth – that is, they are capable of representing the world according to a particular set of stakeholders (managers or users, for example). Indeed, a data model (in whatever notation) can be thought of as one possible representation of a domain. The point is, there are as many possible representations as there are stakeholder groups; in mainstream data modelling, one of these representations “wins” while the others are completely ignored. Indeed, the alternate views generally remain undocumented and so are invariably forgotten. This suggests that a key step in data modelling should be to capture all viewpoints on the domain of interest in a way that makes sensible comparison possible. Apart from helping the group reach a consensus, such documentation is invaluable in explaining to future users and designers why the data model is the way it is. Regular readers of this blog will no doubt see that the IBIS notation and dialogue mapping could be hugely helpful in this process. It would take me too far afield to explore this point here, but I will do so in a future post.
In brief: notations used by mainstream data modellers cannot capture multiple worldviews in a systematic and useful way. These conventional data modelling languages need to be augmented by notations that are capable of incorporating diverse viewpoints.
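The idea of comparing viewpoints can be sketched very simply. The following toy example (all group names and attributes are hypothetical, and this is in no way a substitute for IBIS or dialogue mapping) records how two stakeholder groups see the same "customer" entity, and surfaces the attributes they disagree on – exactly the material a negotiated model needs to address:

```python
# Two hypothetical stakeholder views of the same entity, recorded as
# the sets of attributes each group considers essential.
viewpoints = {
    "sales":   {"customer": {"name", "phone", "last_contacted"}},
    "finance": {"customer": {"name", "credit_limit", "outstanding_balance"}},
}

def contested_attributes(views, entity):
    """Attributes that appear in some viewpoints but not all -
    candidates for explicit discussion and reconciliation."""
    attr_sets = [v[entity] for v in views.values() if entity in v]
    # union minus intersection = attributes not shared by every group
    return set.union(*attr_sets) - set.intersection(*attr_sets)

print(sorted(contested_attributes(viewpoints, "customer")))
# ['credit_limit', 'last_contacted', 'outstanding_balance', 'phone']
```

Even this trivial comparison makes the political point visible: only "name" is uncontested, and everything else must be debated rather than decreed.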
Is the result valid?
The above discussion raises a question, though: what if two stakeholders disagree on a particular point?
When we participate in discussions we want our views to be taken seriously. Consequently, we present our views through statements that we hope others will see as rational – i.e. based on sound premises and logical thought. One presumes that when someone makes a claim, he or she is willing to present arguments that will convince others of its reasonableness. Others will judge the claim based on the validity of the statements the claimant makes. When validity claims are contested, debate ensues with the aim of reaching some kind of agreement.
The philosophy underlying such a process of discourse (which is simply another word for “debate” or “dialogue”) is described in the theory of communicative rationality proposed by the German philosopher Jürgen Habermas. The basic premise of communicative rationality is that rationality (or reason) is tied to social interactions and dialogue. In other words, the exercise of reason can occur only through dialogue. Such communication, or mutual deliberation, ought to result in a general agreement about the issues under discussion. Only once such agreement is achieved can there be a consensus on the actions that need to be taken. Habermas refers to the latter as communicative action, i.e. action resulting from collective deliberation…
In brief: validity is not an objective matter but a subjective – or rather, an intersubjective – one. That is, validity has to be agreed upon by all parties affected by the claim.
What is the social context in which the discipline operates?
From the above it should be clear that the alternate view of data management is radically different from the mainstream approach. The difference is particularly apparent when one looks at the way the alternate approach views different stakeholder groups. Recall that in the mainstream view, managerial perspectives take precedence over all others because the overriding aim of data modelling (as indeed of most enterprise IT activities) is control. Yes, I am aware that it is supposed to be about enablement, but the question is: enablement for whom? In most cases, the answer is that it enables managers to control. In contrast, from the above we see that the reality depicted in a data (or any other) model is socially constructed – that is, it is based on a consensus arising from debates on the spectrum of views that people hold. Moreover, no claim has precedence over others by virtue of authority. Different interpretations of the world have to be fused together in order to build a consensually accepted one.
The social aspect is further muddied by conflicts between managers on matters of data ownership, interpretation and access. Typically, however, such matters lie outside the purview of data modellers.
In brief: the social context in which the discipline operates is that there are a wide variety of stakeholder groups, each of which may hold different views. These must be debated and reconciled.
Summarising the alternate view
The detailed responses to the questions above reveal that the alternate view of data modelling is based on the following assumptions:
- The domain of interest is socially constructed.
- The standard representations of data models are inadequate because they cannot represent multiple viewpoints. They can (and should) be supplemented by notations that can document multiple viewpoints.
- A valid data model is constructed through an iterative synthesis of multiple viewpoints.
- The resulting model is based on a shared understanding of the socially constructed domain.
Clearly these assumptions are diametrically opposed to those of the mainstream. Let’s briefly discuss their implications for the profession.
The most important implication of the alternate view is that a data model is but one interpretation of reality. Since there are many possible interpretations, the “correctness” of any particular model hinges not on some objective truth but on an agreed, best-for-group interpretation. A consequence of this is that well-constructed data models “fuse” or “bring together” at least two different interpretations – those of users and modellers. Typically there are many different groups of users, each with their own interpretation. This being the case, the onus lies on modellers to reconcile any differences between these groups, as they are the ones responsible for creating models.
A further implication of the above is that it is impossible to build a consistent enterprise-wide data model. That said, it is possible to have a high-level strategic data model that consists, say, of entities but lacks detailed attribute-level information. Such a model can be useful because it provides a starting point for a dialogue between user groups and also serves to remind modellers of the entities they may need to consider when building a detailed data model.
The mainstream view asserts that data is gathered to establish the truth. The alternate view, however, makes us aware that data models are built in ways that support particular agendas. Moreover, since the people who use the data are often not those who collect or record it, a gap between assumed and actual meaning is inevitable. Once again this emphasises the fact that the meaning of a particular piece of data is very much in the eye of the beholder.
The mainstream approach to data modelling reflects the general belief that the methods of natural sciences can be successfully applied in the area of systems development. Although this is a good assumption for theoretical computer science, which deals with constructs such as data structures and algorithms, it is highly questionable when it comes to systems development in large organisations. In the latter case social factors dominate, and these tend to escape any logical system. This simple fact remains under-appreciated by the data modelling community and, for that matter, much of the universe of corporate IT.
The alternate view described in this post draws attention to the social and political aspects of data modelling. Although IT vendors and executives tend to give these issues short shrift, the chronically high failure rate of data-centric initiatives (such as those aimed at building enterprise data warehouses) warns us to pause and think. If there is anything at all to be learnt from these failures, it is that data modelling is not a science that deals with facts and models, but an art that has more to do with negotiation and law-making.