Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘IT Management’ Category

The law of requisite variety and its implications for enterprise IT


Introduction

There are two  facets to the operation of IT systems and processes in organisations:  governance, the standards and regulations associated with a system or process; and execution, which relates to steering the actual work of the system or process in specific situations.

An example might help clarify the difference:

The purpose of project management is to keep projects on track. There are two aspects to this: one pertaining to the project management office (PMO), which is responsible for standards and regulations associated with managing projects in general, and the other relating to the day-to-day work of steering a particular project.  The two sometimes work at cross-purposes. For example, successful project managers know that much of their work is about navigating their projects through the potentially treacherous terrain of their organisations, an activity that sometimes necessitates working around, or even breaking, rules set by the PMO.

Governance and steering share a common etymological root: the word kybernetes, which means steersman in Greek.  It also happens to be the root word of cybernetics, the science of regulation or control.  In this post, I apply a key principle of cybernetics to a couple of areas of enterprise IT.

Cybernetic systems

An oft-quoted example of a cybernetic system is a thermostat, a device that regulates temperature based on inputs from the environment.  Most cybernetic systems are way more complicated than a thermostat. Indeed, some argue that the Earth is a huge cybernetic system. A smaller-scale example is a system consisting of a car and its driver, wherein the driver responds to changes in the environment, thereby controlling the motion of the car.

Cybernetic systems vary widely not just in size, but also in complexity. A thermostat is concerned only with the ambient temperature whereas the driver in a car has to worry about a lot more (e.g. the weather, traffic, the condition of the road, kids squabbling in the back seat etc.).   In general, the more complex the system and its processes, the larger the number of variables that are associated with it. Put another way, complex systems must be able to deal with a greater variety of disturbances than simple systems.

The law of requisite variety

It turns out there is a fundamental principle – the law of requisite variety – that governs the capacity of a system to respond to changes in its environment. The law is a quantitative statement about the different types of responses that a system needs to have in order to deal with the range of disturbances it might experience.

According to this paper, the law of requisite variety asserts that:

The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.

Mathematically:

V(E) ≥ V(D) – V(R) – K

Where V represents variety, E represents the essential variable(s) to be controlled, D represents the disturbance, R the regulation and K the passive capacity of the system to absorb shocks. The terms are explained in brief below:

V(E) represents the variety of desired outcomes for the essential (controlled) variable:  the desired temperature range in the case of the thermostat, successful outcomes (i.e. projects delivered on time and within budget) in the case of a project management office.

V(D) represents the variety of disturbances the system can be subjected to (the ways in which the temperature can change, the external and internal forces on a project)

V(R) represents the various ways in which a disturbance can be regulated (the regulator in a thermostat, the project tracking and corrective mechanisms prescribed by the PMO)

K represents the buffering capacity of the system – i.e. stored capacity to deal with unexpected disturbances.
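For the technically inclined, here is a toy sketch in R that illustrates the inequality. Variety is measured in bits (the logarithm of the number of distinguishable states) so that the terms can be added and subtracted; the disturbances, responses and buffer figure are all invented purely for illustration:

    # Toy illustration of V(E) >= V(D) - V(R) - K, with variety measured
    # in bits (log2 of the number of distinguishable states) so that the
    # terms can be added and subtracted. All lists and figures are invented.
    variety <- function(states) log2(length(unique(states)))

    disturbances <- c("password reset", "access request", "printer jam",
                      "network outage", "application bug", "data corruption")
    responses    <- c("1st level script", "2nd level troubleshooting",
                      "3rd level specialist")
    buffer_K     <- 0.5   # passive buffering capacity, in the same (log) units

    V_D <- variety(disturbances)
    V_R <- variety(responses)

    # Lower bound on the variety that leaks through to the essential variable
    min_V_E <- max(V_D - V_R - buffer_K, 0)
    cat("V(D) =", round(V_D, 2), "bits; V(R) =", round(V_R, 2),
        "bits; at least", round(min_V_E, 2),
        "bits of disturbance variety remain unabsorbed\n")

The numbers mean nothing in themselves; the point is simply that when V(R) + K is small relative to V(D), some disturbance variety inevitably shows up as unregulated variation in outcomes, no matter how diligently the individual responses are executed.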

I won’t say any more about the law of requisite variety as it would take me too far afield; the interested and technically minded reader is referred to the link above or this paper for more.

Implications for enterprise IT

In plain English, the law of requisite variety states that “only variety can absorb variety.”  As stated by Anthony Hodgson in an essay in this book, the law of requisite variety:

…leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E. This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try to maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.

This is entirely consistent with our intuitive expectation that the best way to deal with the unexpected is to have a range of tools and approaches at one’s disposal.

In the remainder of this piece, I’ll focus on the implications of the law for an issue that is high on the list of many corporate IT departments: the standardization of  IT systems and/or processes.

The main rationale behind standardizing an IT  process is to handle all possible demands (or use cases) via a small number of predefined responses.   When put this way, the connection to the law of requisite variety is clear: a request made upon a function such as a service desk or project management office (PMO) is a disturbance and the way they regulate or respond to it determines the outcome.

Requisite variety and the service desk

A service desk is a good example of a system that can be standardized. Although users may initially complain about having to log a ticket instead of calling Nathan directly, in time they get used to it, and may even start to see the benefits…particularly when Nathan goes on vacation.

The law of requisite variety tells us that successful standardization requires that all possible demands made on the system be known and regulated by the V(R) term in the equation above. In the case of a service desk this is dealt with by a hierarchy of support levels. 1st level support deals with routine calls (incidents and service requests in ITIL terminology) such as system access and simple troubleshooting. Calls that cannot be handled by this tier are escalated to the 2nd and 3rd levels as needed.  The assumption here is that, between them, the three support tiers should be able to handle the majority of calls.

Slack (the K term) relates to unexploited capacity.  Although needed in order to deal with unexpected surges in demand, slack is expensive to carry when one doesn’t need it.  Given this, it makes sense to incorporate such scenarios into the repertoire of standard system responses (i.e. the V(R) term) whenever possible.  One way to do this is to anticipate surges in demand and hire temporary staff to handle them. Another way is to deal with infrequent scenarios outside the system – i.e. deem them out of scope for the service desk.
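To put the same point in code, here is a minimal sketch of such a response repertoire. The call types and tier assignments are invented for illustration: every incoming call either maps to a predefined tier or falls outside the repertoire and is referred elsewhere.

    # Hypothetical repertoire of standard responses (the V(R) term):
    # each known call type maps to a support tier.
    repertoire <- c("password reset"     = "Tier 1",
                    "access request"     = "Tier 1",
                    "email not syncing"  = "Tier 2",
                    "server performance" = "Tier 3")

    route_call <- function(call_type) {
      if (call_type %in% names(repertoire)) {
        repertoire[[call_type]]                    # handled within the standard system
      } else {
        "Out of scope - refer to change process"   # dealt with outside the system
      }
    }

    route_call("password reset")           # "Tier 1"
    route_call("build me a new system")    # falls outside the standard repertoire

The repertoire works only to the extent that the calls that actually arrive are covered by it, which is precisely the condition spelt out below.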

Service desk standardization is thus relatively straightforward to achieve provided:

  • The kinds of calls that come in are largely predictable.
  • The work can be routinized.
  • All non-routine work – such as an application enhancement request or a demand for a new system – is dealt with outside the system via (say) a change management process.

All this will be quite unsurprising and obvious to folks working in corporate IT. Now  let’s see what happens when we apply the law to a more complex system.

Requisite variety and the PMO

Many corporate IT leaders see the establishment of a PMO as a way to control costs and increase the efficiency of project planning and execution.   PMOs attempt to do this by putting in place governance mechanisms. The underlying cause-effect assumption is that if appropriate rules and regulations are put in place, project execution will necessarily improve.  Although this sounds reasonable, it often does not work in practice: according to this article, a significant fraction of PMOs fail to deliver on the promise of improved project performance. Consider the following points quoted directly from the article:

  • “50% of project management offices close within 3 years (Association for Project Mgmt)”
  • “Since 2008, the correlated PMO implementation failure rate is over 50% (Gartner Project Manager 2014)”
  • “Only a third of all projects were successfully completed on time and on budget over the past year (Standish Group’s CHAOS report)”
  • “68% of stakeholders perceive their PMOs to be bureaucratic     (2013 Gartner PPM Summit)”
  • “Only 40% of projects met schedule, budget and quality goals (IBM Change Management Survey of 1500 execs)”

The article goes on to point out that the main reason for the statistics above is that there is a gap between what a PMO does and what the business expects it to do. For example, according to the Gartner review quoted in the article, over 60% of the stakeholders surveyed believe their PMOs are overly bureaucratic.  I can’t vouch for the veracity of the numbers here as I cannot find the original paper. Nevertheless, anecdotal evidence (via various articles and informal conversations) suggests that a significant number of PMOs fail.

There is a curious contradiction between the case of the service desk and that of the PMO. In the former, process and methodology seem to work whereas in the latter they don’t.

Why?

The answer, as you might suspect, has to do with variety.  Projects and service requests are very different beasts. Among other things, they differ in:

  • Duration: A project typically runs over many months whereas a service request has a lifetime of days.
  • Technical complexity: A project involves many (initially ill-defined) technical tasks that have to be coordinated and whose outputs have to be integrated.  A service request typically consists of one (or a small number) of well-defined tasks.
  • Social complexity: A project involves many stakeholder groups, with diverse interests and opinions. A service request typically involves considerably fewer stakeholders, with limited conflicts of opinions/interests.

It is not hard to see that these differences increase variety in projects compared to service requests. The reason that standardization (usually) works for service desks but (often) fails for PMOs is that PMOs are subjected to a greater variety of disturbances than service desks.

The key point is that the increased variety in the case of the PMO precludes standardisation.  As the law of requisite variety tells us, there are two ways to deal with variety: regulate it or adapt to it. Most PMOs take the regulation route, leading to over-regulation and outcomes that are less than satisfactory. This is exactly what is reflected in the complaint about PMOs being overly bureaucratic. The simple and obvious solution is for PMOs to be more flexible – specifically, they must be able to adapt to the ever-changing demands made upon them by their organisations’ projects.  In terms of the law of requisite variety, PMOs need to have the capacity to change the system response, V(R), on the fly. In practice this means recognising the uniqueness of requests by avoiding the reflex, cookie-cutter responses that characterise bureaucratic PMOs.

Wrapping up

The law of requisite variety is a general principle that applies to any regulated system.  In this post I applied the law to two areas of enterprise IT – service management and project governance – and  discussed why standardization works well  for the former but less satisfactorily for the latter. Indeed, in view of the considerable differences in the duration and complexity of service requests and projects, it is unreasonable to expect that standardization will work well for both.  The key takeaway from this piece is therefore a simple one: those who design IT functions should pay attention to the variety that the functions will have to cope with, and bear in mind that standardization works well only if variety is known and limited.

Written by K

December 12, 2016 at 9:00 pm

The hidden costs of IT outsourcing


Many outsourcing arrangements fail because customers do not factor in hidden costs. In 2009, I wrote a post on these hard-to-quantify transaction costs. The following short video (4 mins 45 secs) summarises the main points of that post in a (hopefully!) easy-to-understand way:

Note: Here’s the full script, for those who prefer to read instead of watching…

One of the questions that organisations grapple with is whether or not to outsource IT work to external vendors. The work of Oliver Williamson – a Nobel Laureate in Economics – provides some insight into this issue.  This video is a brief look at how Williamson’s work on transaction cost economics can be applied to the question of outsourcing IT development or implementation.

A firm has two choices for any economic activity: it can either perform the activity in-house or go to market. In either case, the cost of the activity can be decomposed into production costs, which are direct and indirect costs of producing the good or service, and transaction costs, which are costs associated with making the economic exchange (more on this in a minute).

In the case of in-house IT work, production costs include salaries, equipment costs etc., whereas transaction costs include costs relating to building an IT team (with the right skills, attitude and knowledge).

In the case of outsourced IT work, production costs are similar to those in the in-house case – except that they are now incurred by the vendor and passed on to the client.  The point is, these costs are generally known upfront.

The transaction costs, however, are significantly different. They include things such as:

  1. Search costs: cost of searching for a suitable vendor
  2. Bargaining costs: effort incurred in agreeing on an acceptable price.
  3. Enforcement costs: costs of ensuring compliance with the contract
  4. Costs of coordinating work: this includes the costs of managing the vendor.
  5. Cost of uncertainty: cost associated with unforeseen changes (scope change is a common example)

Now, there are a couple of things to note about transaction costs for outsourcing arrangements:

Firstly, they are typically the client’s problem, not the vendor’s. Secondly, they can be very hard to figure out upfront. They are therefore the hidden costs of outsourcing.

According to Williamson, the decision as to whether or not an economic activity should be outsourced depends critically on these hidden transaction costs. In his words, “The most efficient institutional arrangement for carrying out a particular economic activity would be the one that minimized transaction costs.”

The most efficient institutional arrangement for IT development work is often the market, but in-house arrangements are sometimes better.

The potentially million dollar question is: when are in-house arrangements better?

Williamson’s work provides an answer to this question. He argues that the cost of completing an economic transaction in an open market depends on two factors:

  1. Complexity of the transaction – for example, implementing an ERP system is more complex than implementing a new email system.
  2. Asset specificity – this refers to the degree of customization of the service or product. Highly customized services or products are worth more to the two parties than to anyone else. For example, custom IT services tailored to the requirements of a specific company have more value to the client and provider than to anyone else.

In essence, the transaction costs increase with complexity and degree of customization. From this we can conclude that in-house arrangements may be better for work that is complex or highly customized.  The reason for this is simple: it is difficult to specify such systems in detail upfront. Consequently, contracts for such work tend to be complex…and worse, they invariably leave out important details.
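As a rough illustration of the logic, the sketch below expresses the make-or-buy comparison as production cost plus transaction cost, with the outsourced transaction cost growing with complexity and asset specificity. Every figure and weighting is invented for the purpose of the example.

    # Hypothetical make-or-buy comparison: every number below is invented.
    total_cost <- function(production, transaction) production + transaction

    # Outsourced transaction costs (search, bargaining, enforcement,
    # coordination, uncertainty) assumed to rise with complexity and
    # asset specificity, each scored on a 1-5 scale.
    outsourced_transaction <- function(base, complexity, specificity) {
      base * (1 + 0.4 * complexity + 0.4 * specificity)
    }

    in_house   <- total_cost(production = 500, transaction = 150)
    outsourced <- total_cost(production = 400,
                             transaction = outsourced_transaction(base = 100,
                                                                  complexity = 4,
                                                                  specificity = 5))

    c(in_house = in_house, outsourced = outsourced)
    # For complex, highly customised work the hidden transaction costs
    # can tip the balance back towards the in-house arrangement.

The arithmetic is beside the point; what matters is that the client’s hidden costs scale with precisely those factors that make the work hard to specify in a contract.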

Such contracts will work only if interpreted in a farsighted manner, with disputes being settled directly between the vendor and client instead of resorting to litigation.  When this becomes too hard to do, it makes sense to carry out the activity in-house. Note that this does not mean that it has to be done by internal staff…one can still hire contractors, but it is important to ensure that they remain under internal supervision.

If one chooses to outsource such work it is important to ensure that the contract is as unambiguous and transparent as possible.  Moreover, both the client and the vendor should expect omissions in contracts, and be flexible whenever there are disagreements over the interpretation of contract terms. In the end, this is possible only if there is a trust-based relationship between the client and vendor…and trust, of course, is impossible to contractualise.

To sum up: be wary of outsourcing work that is complex or highly customized…and if you must, be sure to go with a vendor you trust.

Written by K

May 3, 2016 at 4:59 pm

Evolution, obsolescence and enterprise architecture


Introduction

Enterprise architects are seldom (never?) given a blank canvas on which they can draw as they please. They invariably have to begin with an installed base of systems over which they have no control.  As I wrote in a piece on the legacy of legacy systems:

An often unstated (but implicit) requirement [on new systems] is that [they] must maintain continuity between the past and present. This is true even for systems that claim to represent a clean break from the past; one never has the luxury of a completely blank slate, there are always arbitrary constraints placed by legacy systems.

Indeed the system landscape of any large organization is a palimpsest, always retaining traces of what came before.  Those who actually maintain systems  – usually not architects – are painfully aware of this simple truth.

The IT landscape of an organization is therefore a snapshot, a picture that begins to age the instant it is taken. Practicing enterprise architects will say they know this “of course”, and pay due homage to it in their words…but often not in their actions.  The conflicts and contradictions between legacy systems and their aspirational architectures are hard to deal with and hence easier to ignore. In this post, I draw a parallel between this central conundrum of enterprise architecture and the process of biological evolution.

A Batesonian perspective on evolution

I’ve recently been re-reading Mind and Nature: A Necessary Unity, a book that Gregory Bateson wrote towards the end of his life of eclectic scholarship. Tucked away in the appendix of the book is an essay lamenting the fragmentation of knowledge and the lack of transdisciplinary thinking within universities.  Central to the essay is the notion of obsolescence. Bateson argued that much of what was taught in universities lagged behind the practical skills and mindsets that were needed to tackle the problems of that time.  Most people would agree that this is as true today as it was in Bateson’s time, perhaps even more so.

Bateson had a very specific idea of obsolescence in mind. He suggested that the educational system is lopsided because it invariably lags behind what is needed in the “real world”. Specifically, there is a lag between the typical university curriculum and the attitudes, dispositions, knowledge and skills needed to the problems of an ever-changing world. This lag is what Bateson referred to as obsolescence. Indeed, if the external world did not change there would be no lag and hence no obsolescence. As he noted:

I therefore propose to analyze the lopsided process called “obsolescence” which we might more precisely call “one-sided progress.” Clearly for obsolescence to occur there must be, in other parts of the system, other changes compared with which the obsolete is somehow lagging or left behind. In a static system, there would be no obsolescence…

This notion of obsolescence-as-lag has a direct connection with the contrasting processes of developmental and evolutionary biology. The process of development of an embryo is inherently conservative – it develops according to predetermined rules and is relatively robust to external stimuli. On the other hand, after birth, individuals are continually subject to a wide range of external factors (e.g. climate, stress etc.) that are unpredictable. If exposed to such factors over an extended period, they may change their characteristics in response to them (e.g. the tanning effect of sunlight, adaptability etc.).  However, these characteristics are not inheritable.  They are passed on (if at all) by a much slower process of natural selection.  As a consequence, there is a significant lag between external stimuli and the inheritability of the associated characteristics.

As Bateson puts it:

Survival depends upon two contrasting phenomena or processes, two ways of achieving adaptive action. Evolution must always, Janus-like, face in two directions: inward towards the developmental regularities and physiology of the living creature and outward towards the vagaries and demands of the environment. These two necessary components of life contrast in interesting ways: the inner development-the embryology or “epigenesis”-is conservative and demands that every new thing shall conform or be compatible with the regularities of the status quo ante. If we think of a natural selection of new features of anatomy or physiology-then it is clear that one side of this selection process will favor those new items which do not upset the old apple cart. This is minimal necessary conservatism.

In contrast, the outside world is perpetually changing and becoming ready to receive creatures which have undergone change, almost insisting upon change. No animal or plant can ever be “readymade.” The internal recipe insists upon compatibility but is never sufficient for the development and life of the organism. Always the creature itself must achieve change of its own body. It must acquire certain somatic characteristics by use, by disuse, by habit, by hardship, and by nurture. These “acquired characteristics” must, however, never be passed on to the offspring. They must not be directly incorporated into the DNA. In organisational terms, the injunction – e.g. to make babies with strong shoulders who will work better in coal mines- must be transmitted through channels, and the channel in this case is via natural external selection of those offspring who happen (thanks to the random shuffling of genes and random creation of mutations) to have a greater propensity for developing stronger shoulders under the stress of working in coal mine.

The upshot of the above is that the genetic code of any species is inherently obsolete because it is, in at least a few ways, maladapted to its environment.  This is a good thing. Sustainable and lasting change to the genome of a population should occur only through the trial-and-error process of natural selection over many generations. It is only through such a gradual process that one can be sure that a) the adaptation is necessary and b) it occurs with minimal disruption to the existing order.

…and so to enterprise architecture

In essence, the aim of enterprise architecture is to come up with a strategy and plan to move from an old system landscape to a new one. Typically, architectures are proposed based on current technology trends and extrapolations thereof. Frameworks such as The Open Group Architecture Framework (TOGAF) present a range of options for migrating from legacy architecture.

Here’s an excerpt from Chapter 13 of the TOGAF Guide:

[The objective is to] create an overall Implementation and Migration Strategy that will guide the implementation of the Target Architecture, and structure any Transition Architectures. The first activity is to determine an overall strategic approach to implementing the solutions and/or exploiting opportunities. There are three basic approaches as follows:

  • Greenfield: A completely new implementation.
  • Revolutionary: A radical change (i.e., switch on, switch off).
  • Evolutionary: A strategy of convergence, such as parallel running or a phased approach to introduce new capabilities.

What can we say about these options in light of the discussion of the previous sections?

Firstly, from the discussion of the introduction, it is clear that Greenfield situations can be discounted on grounds of rarity alone.  So let’s look at the second option – revolutionary change – and ask if it is viable in light of the discussion of the previous section.

In the case of a particular organization, the gap between an old architecture and technology trends/extrapolations is analogous to the lag between inherited characteristics and external forces. The former resist change; the latter insist on it.  The discussion of the previous section tells us that the former cannot be wished away; they are a natural consequence of “technology genes” embedded in the organization. Because this is so, changes are best introduced in a gradual way that permits adaptation through the slow and painful process of trial and error. This is why the revolutionary approach usually fails.

It follows from the above that the only viable approach to enterprise architecture is an evolutionary one. This process is necessarily gradual. Architects may wish for green fields and revolutions, but the reality is that lasting and sustainable change in an organisation’s technology landscape can only be achieved incrementally, akin to the way in which an aspiring marathon runner’s physiology adapts to the extreme demands of the sport.

The other, perhaps more subtle point made by this analogy is that a particular organization is but one member of a “species” which, in the present context, is a population of organisations that have a certain technology landscape. Clearly, a new style of architecture will be deemed a success only if it is adopted successfully by a significant number of organisations within this population. Equally clear is that this eventuality is improbable because new architectural paradigms are akin to random mutations. Most of these are rightly rejected by organizations, but only after exacting a high price. This explains why most technology fads tend to fade away.

Some consequences

The analogy between the evolution of biological systems and organizational technology landscapes has some interesting implications for enterprise architects. Here are a few that are worth highlighting:

  1. Enterprise architects are caught between a rock and a hard place: to demonstrate value they have to change things rapidly, but rapid changes are more likely to fail than succeed.
  2. The best chance of success lies in an evolutionary approach that accepts trial and error as a natural part of the process. The trick lies in selling that to management…and there are ways to do that.
  3. A corollary of (2) is that old and new elements of the landscape will necessarily have to coexist, often for periods much longer than one might expect. One must therefore design for coexistence. Above all, the focus here should be on the interfaces, for these are the critical elements that enable the old and the new to “talk” to each other.
  4. Enterprise architects should be skeptical of cutting-edge technologies. It is almost always better to bet on proven technologies because they have the benefit of the experience of others.
  5. One of the consequences of an evolutionary process of trial and error is that benefits (or downsides) are often not evident upfront. One must therefore always keep an eye out for these unexpected features.

Finally, it is worth pointing out that an interesting feature of all the above points is that they are consistent with the principles of emergent design.

Wrapping up

In this article I’ve attempted to highlight a connection between the evolution of organizational technology landscapes and the process of biological evolution. At the heart of both lies a fundamental tension between inherent conservatism (the tendency to preserve the status quo) and the imperative to evolve in order to adapt to changes imposed by the environment. There is no question that maintaining the status quo is never an option. The question is how to evolve in order to ensure the best chance of success. Evolution tells us that the best approach is a gradual one, via a process of trial, error and learning.

Written by K

December 16, 2015 at 7:26 am

The proof of concept – a business fable


The Head of Data Management was a troubled man.  The phrase “Big Data” had cropped up more than a few times in a lunch conversation with Jack, a senior manager from Marketing.

Jack had pointedly asked what the data management team was doing about “Big Data”…or whether they were still stuck in a data time warp. This comment had put the Head of Data Management on the defensive. He’d mumbled something about a proof of concept with vendors specializing in Big Data solutions being on the drawing board.

He spent the afternoon dialing various vendors in his contact list, setting up appointments to talk about what they could offer.  His words were music to the vendors’ ears; he had no trouble getting them to bite.

The meetings went well. He was soon able to get a couple of vendors to agree to doing proofs of concept, which amounted to setting up trial versions of their software on the organisation’s servers, thus giving IT staff an opportunity to test-drive and compare the vendors’ offerings.

The software was duly installed and the concept duly proven.

…but the Head of Data Management was still a troubled man.

He had sought an appointment with Jack to inform him about the successful proof of concept.  Jack had listened to his spiel, but instead of being impressed had asked a simple question that had the Head of Data Management stumped.

“What has your proof of concept proved?” Jack asked.

“What do…do you mean?” stammered the Head of Data Management.

“I don’t think I can put it any clearer than I already have. What have you proved by doing this so-called proof of concept?”

“Umm… we have proved that the technology works,” came the uncertain reply.

“Surely we know that the technology works,” said Jack, a tad exasperated.

“Ah, but we don’t know that it works for us,” shot back the Head of Data Management.

“Look, I’m just a marketing guy, I know very little about IT,” said Jack, “but I do know a thing or two about product development and marketing. I can say with some confidence that the technology– whatever it is – does what it is supposed to do. You don’t need to run a proof of concept to prove that it works. I’m sure the vendors would have done that before they put their product on the market. So, my question remains: what has your proof of concept proved?”

“Well, we’ve proved that it does work for us…”

“Proved it works for what??” asked Jack, exasperation mounting. “Let me put it as clearly as I can – what business problem have you solved that you could not address earlier?”

“Well we’ve taken some of the data from our largest databases – the sales database, from your area of work, and loaded it into the Big Data infrastructure.”

“…and then what?”

“Well that’s about it…for now.”

“So, all you have is another database. You haven’t actually done anything that cannot be done on our existing databases. You haven’t tackled a business problem using your new toy,” said Jack.

“Yes, but this is just the start. We’ll now start doing analytics and all that other stuff we were talking about.”

“I’m sorry to say, this is a classic example of putting the cart before the horse,” said Jack, shaking his head in disbelief.

“How so?” challenged the Head of Data Management.

“Should be obvious but maybe it isn’t, so let me spell it out. You’ve jumped to a solution – the technology you’ve installed – without taking the time to define the business problems that the technology should address.”

“…but…but you talked about Big Data when we had lunch the other day.”

“Indeed I did. And what I expected was the start of a dialogue between your people and mine about the kinds of problem we would like to address. We know our problems well, you guys know the technology; but the problems should always drive the solution. What you’ve done now is akin to a solution in search of a problem, a cart before a horse.”

“I see,” said the Head of Data Management slowly.

And Jack could only hope that he really did see.

Disclaimer:

This is a work of fiction. It could never happen in real life 😉

Written by K

October 21, 2015 at 7:37 am

Setting up an internal data analytics practice – some thoughts from a wayfarer


Introduction

This year has been hugely exciting so far: I’ve been exploring and playing with various techniques that fall under the general categories of data mining and text analytics. What’s been particularly satisfying is that I’ve been fortunate to find meaningful applications for these techniques within my organization.

Although I have a fair way to travel yet, I’ve learnt that common wisdom about data analytics – especially the stuff that comes from software vendors and high-end consultancies – can be misleading, even plain wrong. Hence this post in which I dispatch some myths and share a few pointers on establishing data analytics capabilities within an organization.

Busting a few myths

Let’s get right to it by taking a critical look at a few myths about setting up an internal data analytics practice.

  1. Requires high-end technology and a big budget: this myth is easy to bust because I can speak from recent experience. No, you do not need cutting-edge technology or an oversized budget.   You can get started with an outlay of $0 – yes, that’s right, for free!  All you need is the open-source statistical package R (check out this section of my article on text mining for more on installing and using R) and the willingness to roll up your sleeves and learn (more about this later).  No worries if you prefer to stick with familiar tools – you can even begin with Excel. A minimal example of the sort of first script I have in mind follows this list.
  2. Needs specialist skills: another myth floating around is that you need PhD-level knowledge in statistics or applied mathematics to do practical work in analytics. Sorry, but that’s plain wrong. You do need a PhD to do research in analytics and develop your own algorithms, but not if you want to apply algorithms written by others. Yes, you will need to develop an understanding of the algorithms you plan to use, a feel for how they work and the ability to tell whether the results make sense. There are many good resources that can help you develop these skills – see, for example, the outstanding books by James, Witten, Hastie and Tibshirani, and by Kuhn and Johnson.
  3. Must have sponsorship from the top: this one is possibly a little more controversial than the previous two. It could be argued that it is impossible to gain buy-in for a new capability without sponsorship from top management. However, in my experience, it is OK to start small by finding potential internal “customers” for analytics services through informal conversations with folks in different functions. I started by having informal conversations with managers in two different areas: IT infrastructure and sales / marketing.  I picked these two areas because I knew that they had several gigabytes of under-exploited data – a good bit of it unstructured – and a lot of open questions that could potentially be answered (at least partially) via methods of data and text analytics.  It turned out I was right. I’m currently doing a number of proofs of concept and small projects in both these areas.  So you don’t need sponsorship from the top as long as you can get buy-in from people who have problems they believe you can solve. If you deliver, they may even advocate your cause to their managers.
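To give a sense of how little machinery is involved, here is that first exploratory script, sketched out. It uses nothing beyond base R, and the file and column names are placeholders – substitute whatever extract you can lay your hands on:

    # A first exploratory pass over an existing data extract, using base R only.
    # The file name and column names below are placeholders.
    tickets <- read.csv("service_desk_extract.csv", stringsAsFactors = FALSE)

    str(tickets)                        # what columns and types do we have?
    summary(tickets$resolution_hours)   # spread of resolution times

    # A simple cross-tab: ticket volumes by category and priority
    table(tickets$category, tickets$priority)

    # ...and a first picture to take back to whoever gave you the data
    hist(tickets$resolution_hours,
         main = "Distribution of resolution times",
         xlab = "Hours to resolve")

The aim at this stage is not sophistication but a quick, concrete look at data that someone actually cares about.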

A caveat is in order at this point:  my organization is not the same as yours, so you may well need to follow a different path from mine. Nevertheless, I do believe that it is always possible to find a way to start without needing permission or incurring official wrath.  In that spirit, I now offer some suggestions to help kick-start your efforts.

Getting started

As the truism goes, the hardest part of any new effort is getting started.  The first thing to keep in mind is to start small. This is true even if you have official sponsorship and a king-sized budget. It is very tempting to spend a lot of time garnering management support for investing in high-end technology.  Don’t do it!  Do the following instead:

  1. Develop an understanding of the problems faced by people you plan to approach: The best way to do this is to talk to analysts or frontline managers. In my case, I was fortunate to have access to some very savvy analysts in IT service management and marketing who gave me a slew of interesting ideas to pursue. A word of advice: it is best not to talk to senior managers until you have a few concrete results that you can quantify in terms of dollar values.
  2. Invest time and effort in understanding analytics algorithms and gaining practical experience with them: As mentioned earlier, I started with R – and I believe it is the best choice. Not just because it is free but also because there are a host of packages available to tackle just about any analytics problem you might encounter.  There are some excellent free resources available to get you started with R (check out this listing on the r-statistics blog, for example). It is important that you start cutting code as you learn. This will help you build a repertoire of techniques and approaches as you progress. If you get stuck when coding, chances are you will find a solution on the wonderful stackoverflow site.
  3. Evangelise, evangelise, evangelise: You are, in effect, trying to sell an idea to people within your organization. You therefore have to identify people who might be able to help you and then convince them that your idea has merit. The best way to do the latter is to have concrete examples of problems that you have tackled. This is a chicken-and-egg situation in that you can’t have any examples until you gain support.  I got support by approaching people I know well. I found that most – no, all – of them were happy to provide me with interesting ideas and access to their data.
  4. Begin with small (but real) problems: It is important to start with the “low-hanging fruit” – the problems that would take the least effort to solve. However, it is equally important to address real problems, i.e. those that matter to someone.
  5. Leverage your organisation’s existing data infrastructure: From what I’ve written thus far, I may have given you the impression that the tools of data analytics stand separate from your existing data infrastructure. Nothing could be further from the truth. In reality, I often do the initial work (basic preprocessing and exploratory analysis) using my organisation’s relational database infrastructure. Relational databases have sophisticated analytical extensions to SQL as well as efficient bulk data cleansing and transport facilities. Using these makes good sense, particularly if your R installation is on a desktop or laptop computer as it is in my case. Moreover, many enterprise database vendors now offer add-on options that integrate R with their products. This gives you the best of both worlds – relational and analytical capabilities on an enterprise-class platform. A rough sketch of this division of labour follows the list.
  6. Build relationships with the data management team: Remember the work you are doing falls under the ambit of the group that is officially responsible for managing data in your organization. It is therefore important that you keep them informed of what you’re doing.  Sooner or later your paths will cross, and you want to be sure that there are no nasty surprises (for either side!) at that point. Moreover, if you build connections with them early, you may even find that the data management team supports your efforts.
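By way of illustration, here is roughly what that division of labour can look like. This is a sketch only: the data source, table and column names are invented, and RODBC is used simply as one common way of talking to a corporate database from R – substitute whatever connectivity your database vendor provides.

    # Sketch: push the heavy lifting (filtering, aggregation) to the database
    # via SQL, and pull only the summarised result into R for analysis.
    # The DSN, table and column names are all hypothetical.
    library(RODBC)

    con <- odbcConnect("corporate_dw")

    sales_by_region <- sqlQuery(con, "
      SELECT region, SUM(order_value) AS revenue, COUNT(*) AS orders
      FROM   sales_orders
      WHERE  order_date >= '2014-01-01'
      GROUP  BY region")

    odbcClose(con)

    # The summarised result is small enough to explore comfortably on a laptop
    sales_by_region[order(-sales_by_region$revenue), ]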

Having waxed verbose, I should mention that my effort is work in progress and I do not know where it will lead. Nevertheless, I offer these suggestions as a wayfarer who is considerably further down the road from where he started.

Parting thoughts

You may have noticed that I’ve refrained from using the overused and over-hyped term “Big Data” in this piece. This is deliberate. Indeed, the techniques I have been using have nothing to do with the size of the datasets. To be honest, I’ve applied them to datasets ranging from a few thousand to a few hundred thousand records, both of which qualify as Very Small Data in today’s world.

Your vendor will be only too happy to sell you Big Data infrastructure that will set you back a good many dollars. However, the chances are good that you do not need it right now.  You’ll be much better off going back to them after you hit the limits of your current data processing infrastructure. Moreover, you’ll also be better informed about your needs then.

You may also be wondering why I haven’t said much about the composition of the analytics team (barring the point about not needing PhD statisticians) and how it should be organized.  The reason I haven’t done so is that I believe the right composition and organizational structure will emerge from the initial projects done and feedback received from internal customers. The resulting structure will be better suited to the organization than one that is imposed upfront.  Long time readers of this blog might recognize this as a tenet of emergent design.

Finally, I should reiterate that my efforts are still very much in progress and I know not where they will lead. However, even if they go nowhere, I would have learnt something about my organization and picked up a useful, practical skill. And that is good enough for me.

Written by K

September 3, 2015 at 8:28 pm

TOGAF or not TOGAF… but is that the question?


“The ‘Holy Grail’ of effective collaboration is creating shared understanding, which is a precursor to shared commitment.” – Jeff Conklin.

“Without context, words and actions have no meaning at all.” – Gregory Bateson.

I spent much of last week attending a class on the TOGAF Enterprise Architecture (EA) framework.  Prior experience with  IT frameworks such as PMBOK and ITIL had taught me that much depends on the instructor – a good one can make the material come alive whereas a not-so-good one can make it an experience akin to watching grass grow. I needn’t have worried: the instructor was superb, and my classmates, all of whom are experienced IT professionals / architects, livened up the proceedings through comments and discussions both in class and outside it. All in all, it was a thoroughly enjoyable and educative experience, something I cannot say for many of the professional courses I have attended.

One of the things that struck me about TOGAF is the way in which the components of the framework hang together to make a coherent whole (see the introductory chapter of the framework for an overview). To be sure, there is a lot of detail within those components, but there is a certain abstract elegance – dare I say, beauty – to the framework.

That said, TOGAF is (almost) entirely silent on the following question, which I addressed in a post late last year:

Why is Enterprise Architecture so hard to get right?

Many answers have been offered. Here are some, extracted from articles published by IT vendors and consultancies:

  • Lack of sponsorship
  • Not engaging the business
  • Inadequate communication
  • Insensitivity to culture / policing mentality
  • Clinging to a particular tool or framework
  • Building an ivory tower
  • Wrong choice of architect

(Note: the above points are taken from this article and this one)

It is interesting that the first four issues listed are related to the fact that different stakeholders in an organization have vastly different perspectives on what an enterprise architecture initiative should achieve.  This lack of shared understanding is what makes enterprise architecture a socially complex problem rather than a technically difficult one. As Jeff Conklin points out in this article, problems that are technically complex will usually have a solution that will be acceptable to all stakeholders, whereas socially complex problems will not.  Sending a spacecraft to Mars is an example of the former whereas an organization-wide ERP  (or EA!) project or (on a global scale) climate change are instances of the latter.

Interestingly, even the fifth and sixth points in the list above – framework dogma and retreating to an ivory tower – are usually consequences of the inability to manage social complexity. Indeed, that is precisely the point made in the final item in the list: enterprise architects are usually selected for their technical skills rather than their ability to deal with ambiguities that are characteristic of social complexity.

TOGAF offers enterprise architects a wealth of tools to manage technical complexity. These need to be complemented by a suite of techniques to reconcile worldviews of different stakeholder groups.  Some examples of such techniques are Soft Systems Methodology, Polarity Management, and Dialogue Mapping. I won’t go into details of these here, but if you’re interested, please have a look at my posts entitled, The Approach – a dialogue mapping story and The dilemmas of enterprise IT for brief introductions to the latter two techniques via IT-based examples.

<Advertisement> Better yet, you could check out Chapter 9 of my book for a crash course on Soft Systems Methodology and Polarity Management, and the chapters thereafter for a deep dive into Dialogue Mapping </Advertisement>.

Apart from social complexity, there is the problem of context – the circumstances that shape the unique culture and features of an organization.  As I mentioned in my introductory remarks, the framework is abstract – it applies to an ideal organization in which things can be done by the book. But such an organization does not exist!  Aside from unique people-related and political issues, all organisations have their own quirks and unique features that distinguish them from other organisations, even within the same domain. Despite superficial resemblances, no two pharmaceutical companies are alike. Indeed, the differences are the whole point because they are what make a particular organization what it is. To paraphrase the words of the anthropologist, Gregory Bateson, the differences are what make a difference.

Some may argue that the framework acknowledges this and encourages, even exhorts, people to tailor the framework to their needs. Sure, the word “tailor” and its variants appear almost 700 times in version 9.1 of the standard but, once again, there is no advice offered on how this tailoring should be done.  And one can well understand why: it is impossible to offer any sensible advice if one doesn’t know the specifics of the organization, which includes its context.

On a related note, the TOGAF framework acknowledges that there is a hierarchy of architectures ranging from the general (foundation) to the specific (organization). However, despite this acknowledgement of diversity, in practice TOGAF tends to focus on similarities between organisations. Most of the prescribed building blocks and processes are based on assumed commonalities between the structures and processes in different organisations.  My point is that, although similarities are important, architects need to focus on differences. These could be differences between the organization they are working in and the TOGAF ideal, or even between their current organization and others that they have worked with in the past (and this is where experience comes in really handy). Cataloguing and understanding these unique features – the differences that make a difference – draws attention to precisely those issues that can cause heartburn and sleepless nights later.

I have often heard arguments along the lines of “80% of what we do follows a standard process, so it should be easy for us to standardize on a framework.” These are famous last words, because some of the 20% that is different is what makes your organization unique, and is therefore worthy of attention. You might as well accept this upfront so that you get a realistic picture of the challenges early in the game.

To sum up, frameworks like TOGAF are abstractions based on an ideal organization; they gloss over social complexity and the unique context of individual organisations.  So, questions such as the one posed in the title of this post are akin to the pseudo-choice between Coke and Pepsi, for the real issue is something else altogether. As Tom Graves tells us in his wonderful blog and book, the enterprise is a story rather than a structure, and its architecture an ongoing sociotechnical drama.

Written by K

March 17, 2015 at 8:09 pm

The danger within: internally-generated risks in projects


Introduction

In their book, Waltzing with Bears, Tom DeMarco and Timothy Lister coined the phrase, “risk management is project management for adults”.  Twenty years on, it appears that their words have been taken seriously: risk management now occupies a prominent place in BOKs, and has also become a key element of project management practice.

On the other hand, if the evidence is to be believed (as per the oft-quoted Chaos Report, for example), IT projects continue to fail at an alarming rate. This is curious because one would have expected that a greater focus on risk management ought to have resulted in better outcomes.  So, is it possible at all that risk management (as it is currently preached and practiced in IT project management) cannot address certain risks…or, worse, that there are certain risks that are simply not recognized as risks?

Some time ago, I came across a paper by Richard Barber that sheds some light on this very issue. This post elaborates on the nature and importance of such “hidden” risks by drawing on Barber’s work as well as my experiences and those of my colleagues with whom I have discussed the paper.

What are internally generated risks?

The standard approach to risk is based on the occurrence of events. Specifically, risk management is concerned with identifying potential adverse events and taking steps to  reduce either their probability of occurrence or their impact. However, as Barber points out, this is a limited view of risk because it overlooks adverse conditions that are built into the project environment. A good example of this is an organizational norm that centralizes decision making at the corporate or managerial level. Such a norm would discourage a project manager from taking appropriate action when confronted with an event that demands an on-the-spot decision.  Clearly, it is wrong-headed to attribute the risk to the event because the risk actually has its origins in the norm. In other words, it is an internally generated risk.

(Note: the notion of an internally generated risk is akin to the risk as a pathogen concept that I discussed in this post many years ago.)

Barber defines an internally generated risk as one that has its origin within the project organisation or its host, and arises from [preexisting] rules, policies, processes, structures, actions, decisions, behaviours or cultures. Some other examples of such risks include:

  • An overly bureaucratic PMO.
  • An organizational culture that discourages collaboration between teams.
  • An organizational structure that has multiple reporting lines – this is what I like to call a pseudo-matrix organization 🙂

These factors are similar to those that I described in my post on the systemic causes of project failure. Indeed, I am tempted to call these systemic risks because they are related to the entire system (project + organization). However, that term has already been appropriated by the financial risk community.

Since the term is relatively new, it is important to draw distinctions between internally generated and other types of risks. It is easy to do so because the latter (by definition) have their origins outside the hosting organization. A good example of the latter is the risk of a vendor not delivering a module on time or, worse, going into receivership prior to delivering the code.

Finally, there are certain risks that are neither internally generated nor external. For example, using a new technology is inherently risky simply because it is new. Such a risk is inherent rather than internally generated or external.

Understanding the danger within

The author of the paper surveyed nine large projects with the intent of getting some insight into the nature of internally generated risks.  The questions he attempted to address are the following:

  1. How common are these risks?
  2. How significant are they?
  3. How well are they managed?
  4. What is the relationship between the ability of an organization to manage such risks and the organisation’s project management maturity level (i.e. the maturity of its project management processes)?

Data was gathered through group workshops and one-on-one interviews in which the author asked a number of questions that were aimed at gaining insight into:

  1. The key difficulties that project managers encountered on the projects.
  2. What they perceived to be the main barriers to project success.

The aim of the one-on-one interviews was to allow for a more private setting in which sensitive issues (politics, dysfunctional PMOs and brain-dead rules / norms) could be freely discussed.

The data gathered was studied in detail, with the intent of identifying internally generated risks. The author describes the techniques he used to minimize subjectivity and to ensure that only significant risks were considered. I will omit these details here, and instead focus on his findings as they relate to the questions listed above.

Commonality of internally generated risks

Since organizational rules and norms are often flawed, one might expect that internally generated risks would be fairly common in projects. The author found that this was indeed the case with the projects he surveyed: in his words, the smallest number of non-trivial internally generated risks identified in any of the nine projects was 15, and the highest was 30!  Note: the identification of non-trivial risks was done by eliminating those risks that a wide range of stakeholders agreed were unimportant.

Unfortunately, he does not explicitly list the most common internally-generated risks that he found, though a few are named later in the article.

I suspect that experienced project managers would be able to name many more.

Significance of internally generated risks

Determining the significance of these risks is tricky because one has to figure out their probability of occurrence.  The impact is much easier to get a handle on, as one has a pretty good idea of the consequences of such risks should they eventuate. (Question: What happens if there is inadequate sponsorship? Answer: the project is highly likely to fail!).   The author attempted to get a qualitative handle on the probability of occurrence by asking relevant stakeholders to estimate the likelihood of occurrence. Based on the responses received, he found that a large fraction of the internally-generated risks are significant (high probability of occurrence and high impact).

Management of internally generated risks

To identify whether internally generated risks are well managed, the author asked relevant project teams to look at all the significant internal risks on their project and classify them as to whether or not they had been identified by the project team prior to the research. He found that in over half the cases, less than 50% of the risks had been identified. Worse, even most of the risks that had been identified were not being managed!

The relationship between project management maturity and susceptibility to internally generated risk

Project management maturity refers to the level of adoption of  standard good practices within an organization. Conventional wisdom tells us that there should be an inverse correlation between maturity levels and susceptibility to internally generated risk –  the higher the maturity level, the lower the susceptibility.

The author assessed maturity levels by interviewing various stakeholders within the organization and also by comparing the processes used within the organization to well-known standards.  The results indicated a weak negative correlation – that is, organisations with a higher level of maturity tended to have a smaller number of internally generated risks. However, as the author admits, one cannot read much into this finding as the correlation was weak.

Discussion

The study suggests that internally generated risks are common and significant on projects. However, the small sample size also suggests that more comprehensive surveys are needed.  Nevertheless, anecdotal evidence from colleagues I spoke with suggests that the findings are reasonably robust. Moreover, it is also clear (both from the study and my conversations) that these risks are not very well managed.  There is a good reason for this: internally generated risks originate in human behavior and/or dysfunctional structures. These are difficult topics to address in an organizational setting because people are unlikely to tell those above them in the hierarchy that they (the higher-ups) are the cause of a problem.  A classic example of such a risk is estimation by decree – where a project team is told to just get it done by a certain date. Although most project managers are aware of such risks, they are reluctant to name them for obvious reasons.

Conclusion

I suspect most project managers who work in corporate environments will have had to grapple with internally generated risks in one form or another.  Although traditional risk management does not recognize these risks as risks, seasoned project managers know from experience that people, politics or even processes can pose major problems to the smooth working of projects.  However, even when recognised for what they are, these risks can be hard to tackle because they lie outside a project manager’s sphere of influence.  They therefore tend to become those proverbial pachyderms in the room – known to all but never discussed, let alone documented…and therein lies the danger within.

Written by K

December 10, 2014 at 7:23 pm
