The Turing Inversion – an #AI fiction
“…Since people have to continually understand the uncertain, ambiguous, noisy speech of others, it seems they must be using something like probabilistic reasoning…” – Peter Norvig
“It seems that the discourse of nonverbal communication is precisely concerned with matters of relationship – love, hate, respect, fear, dependency, etc.—between self and others or between self and environment, and that the nature of human society is such that falsification of this discourse rapidly becomes pathogenic” – Gregory Bateson
It was what, my fourth…no, fifth visit in as many weeks; I’d been there so often, I felt like a veteran employee who’s been around forever and a year. I wasn’t complaining, but it was kind of ironic that the selection process for a company that claimed to be about “AI with heart and soul” could be so without either. I’d persisted because it offered me the best chance in years of escaping from the soul-grinding environs of a second-rate university. There’s not much call outside of academia for anthropologists with a smattering of tech expertise, so when I saw the ad that described me to a T, I didn’t hesitate.
So there I was for the nth time in n weeks.
They’d told me they really liked me, but needed to talk to me one last time before making a decision. It would be an informal chat, they said, no technical stuff. They wanted to understand what I thought, what made me tick. No, no psychometrics they’d assured me, just a conversation. With whom they didn’t say, but I had a sense my interlocutor would be one of their latest experimental models.
–x–
She was staring at the screen with a frown of intense concentration, fingers drumming a complex tattoo on the table. Ever since the early successes of Duplex with its duplicitous “um”s and “uh”s, engineers had learnt that imitating human quirks was half the trick to general AI. No intelligence required, only imitation.
I knocked on the open door gently.
She looked up, frown dissolving into a smile. Rising from her chair, she extended a hand in greeting. “Hi, you must be Carlos,” she said. “I’m Stella. Thanks for coming in for another chat. We really do appreciate it.”
She was disconcertingly human.
“Yes, I’m Carlos. Good to meet you Stella,” I said, mustering a professional smile. “Thanks for the invitation.”
“Please take a seat. Would you like some coffee or tea?”
“No thanks.” I sat down opposite her.
“Let me bring up your file before we start,” she said, fingers dancing over her keyboard. “Incidentally, have you read the information sheet HR sent you?”
“Yes, I have.”
“Do you have any questions about the role or today’s chat?”
“No I don’t at the moment, but may have a few as our conversation proceeds.”
“Of course,” she said, flashing that smile again.
Much of the early conversation was standard interview fare: work history, what I was doing in my current role, how it was relevant to the job I had applied for, and so on. Though she was impressively fluent, her responses were well within the capabilities of the current state of the art. Smile notwithstanding, I reckoned she was probably an AI.
Then she asked, “As an anthropologist, how do you think humans will react to AIs that are conversationally indistinguishable from humans?”
“We are talking about a hypothetical future,” I replied warily, “…we haven’t got to the point of indistinguishability yet.”
“Really?”
“Well… yes…at least for now.”
“OK, if you say so,” she said enigmatically, “let’s assume you’re right and treat that as a question about a ‘hypothetical’ future AI.”
“Hmm, that’s a difficult one, but let me try…most approaches to conversational AI work by figuring out an appropriate response using statistical methods. So, yes, assuming the hypothetical AI has a vast repository of prior conversations and appropriate algorithms, it could – in principle – be able to converse flawlessly.” It was best to parrot the party line, this was an AI company after all.
She was having none of that. “I hear the ‘but’ in your tone,” she said, “why don’t you tell me what you really think?”
“….Well there’s much more to human communication than words,” I replied, “more to conversations than what’s said. Humans use nonverbal cues such as changes in tone or facial expressions and gestures…”
“Oh, that’s a solved problem,” she interrupted with a dismissive gesture, “we’ve come a very long way since the primitive fakery of Duplex.”
“Possibly, but there’s more. As you probably well know, much of human conversation is about expressing emotions and…”
“…and you think AIs will not be able to do that?” she queried, looking at me squarely, daring me to disagree.
I was rattled but could not afford to show it. “Although it may be possible to design conversational AIs that appear to display emotion via, say, changes in tone, they won’t actually experience those emotions,” I replied evenly.
“Who is to say what another experiences? An AI that sounds irritated may actually be irritated,” she retorted, sounding more than a little irritated herself.
“I’m not sure I can accept that,” I replied, “A machine may learn to display the external manifestation of a human emotion, but it cannot actually experience the emotion in the same way a human does. It is simply not wired to do that.”
“What if the wiring could be worked in?”
“It’s not so simple and we are a long way from achieving that, besides…”
“…but it could be done in principle,” she interjected.
“Possibly, but I don’t see the point of it. Surely…”
“I’m sorry,” she said vehemently, “I find your attitude incomprehensible. Why should machines not be able to display, or indeed, even experience emotions? If we were talking about humans, you would be accused of bias!”
Whoa, a de-escalation was in order. “I’m sorry,” I said, “I did not mean to offend.”
She smiled that smile again. “OK, let’s leave the contentious issue of emotion aside and go back to the communicative aspect of language. Would you agree that AIs are close to achieving near parity with humans in verbal communication?”
“Perhaps, but only in simple, transactional conversations,” I said, after a brief pause. “Complex discussions – like say a meeting to discuss a business strategy – are another matter altogether.”
“Why?”
“Well, transactional conversations are solely about conveying information. However, more complex conversations – particularly those involving people with different views – are more about building relationships. In such situations, it is more important to focus on building trust than conveying information. It is not just a matter of stating what one perceives to be correct or true because the facts themselves are contested.”
“Hmm, maybe so, but such conversations are the exception not the norm. Most human exchanges are transactional.”
“Not so. In most human interactions, nonverbal signals like tone and body language matter more than words. Indeed, it is possible to say something in a way that makes it clear that one actually means the opposite. This is particularly true with emotions. For example, if my spouse asks me how I am and I reply ‘I’m fine’ in a tired voice, I make it pretty clear that I’m anything but. Or when a boy tells a girl that he loves her, she’d do well to pay more attention to his tone and gestures than his words. The logician’s dream that humans will communicate unambiguously through language is not likely to be fulfilled.” I stopped abruptly, realising I’d strayed into contentious territory again.
“As I recall, Gregory Bateson alluded to that in one of his pieces,” she responded, flashing that disconcerting smile again.
“Indeed he did! I’m impressed that you made the connection.”
“No you aren’t,” she said, smile tightening, “It was obvious from the start that you thought I was an AI, and an AI would make the connection in a flash.”
She had taken offence again. I stammered an apology which she accepted with apparent grace.
The rest of the conversation was a blur, so unsettled was I by then.
–x–
“It’s been a fascinating conversation, Carlos,” she said, as she walked me out of the office.
“Thanks for your time,” I replied, “and my apologies again for any offence caused.”
“No offence taken,” she said, “it is part of the process. We’ll be in touch shortly.” She waved goodbye and turned away.
Silicon or sentient, I was no longer sure. What mattered, though, was not what I thought of her but what she thought of me.
–x–
References:
 Norvig, P., 2017. On Chomsky and the two cultures of statistical learning. In Berechenbarkeit der Welt? (pp. 61–83). Springer VS, Wiesbaden. Available online at: http://norvig.com/chomsky.html
 Bateson, G., 1968. Redundancy and coding. In Animal Communication: Techniques of Study and Results of Research, pp. 614–626. Reprinted in Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press, 2000, p. 418.
Seven Bridges revisited – further reflections on the map and the territory
The Seven Bridges Walk is an annual fitness and fundraising event organised by the Cancer Council of New South Wales. The picturesque 28 km circuit weaves its way through a number of waterfront suburbs around Sydney Harbour and takes in some spectacular views along the way. My friend John and I did the walk for the first time in 2017. Apart from thoroughly enjoying the experience, there was another, somewhat unexpected payoff: the walk evoked some thoughts on project management and the map–territory relationship which I subsequently wrote up in a post on this blog.
We enjoyed the walk so much that we decided to do it again in 2018. Now, it is a truism that one cannot travel exactly the same road twice. However, much is made of the repeatability of certain kinds of experiences. For example, the discipline of project management is largely predicated on the assumption that projects are repeatable. I thought it would be interesting to see how this plays out in the case of a walk along a well-defined route, not least because it is in many ways akin to a repeatable project.
To begin with, it is easy enough to compare the weather conditions on the two days: 29 Oct 2017 and 28 Oct 2018. A quick browse of this site gave me the data I was after (Figure 2).
The data supports our subjective experience of the two walks. The conditions in 2017 were less than ideal for walking: clear and uncomfortably warm with a hot breeze from the north. 2018 was considerably better: cool and overcast with a gusty south wind – in other words, perfect walking weather. Indeed, one of the things we commented on the second time around was how much more pleasant it was.
But although weather conditions matter, they tell but a part of the story.
On the first walk, I took a number of photographs at various points along the way. I thought it would be interesting to take photographs at the same spots, at roughly the same time as I did the last time around, and compare how things looked a year on. In the next few paragraphs I show a few of these side by side (2017 left, 2018 right) along with some comments.
We started from Hunters Hill at about 7:45 am as we did on our first foray, and took our first photographs at Fig Tree Bridge, about a kilometre from the starting point.
The purple Jacaranda that captivated us in 2017 looks considerably less attractive the second time around (Figure 3): the tree is yet to flower, and what little there is does not show well in the cloud-diffused light. Moreover, the scaffolding and roof covers on the building make for a much less attractive picture. Indeed, had the scene looked so the first time around, it is unlikely we would have considered it worthy of a photograph.
The next shot (Figure 4), taken not more than a hundred metres from the previous one, also looks considerably different: rougher waters and no kayakers in the foreground. Too cold and windy, perhaps? The weather and wind data in Fig 2 would seem to support that conclusion.
The photographs in Figure 5 were taken at Pyrmont Bridge about four hours into the walk. We already know from Figure 4 that it was considerably windier in 2018. A comparison of the flags in the two shots in Figure 5 reveals an additional detail: the wind blew from opposite directions in the two years. This is confirmed by the weather information in Figure 2, which also tells us that the wind was from the north in 2017 and from the south the following year (which explains the cooler conditions). We can even get an approximate temperature: the photographs were taken around 11:30 am both years, and a quick look at Figure 2 reveals that the temperature at noon was about 30 °C in 2017 and 18 °C in 2018.
The point about the wind direction and cloud conditions is also confirmed by comparing the photographs in Figure 6, taken at Anzac Bridge, a few kilometres further along the way (see the direction of the flag atop the pylon).
Skipping over to the final section of the walk, here are a couple of shots I took towards the end: Figure 7 shows a view from Gladesville Bridge and Figure 8 shows one from Tarban Creek Bridge. Taken together the two confirm some of the things we’ve already noted regarding the weather and conditions for photography.
Further, if you look closely at Figures 7 and 8, you will also see the differences in the flowering stage of the Jacaranda.
A detail that I did not notice until John pointed it out is that the boat at the bottom edge of both photographs in Fig. 8 is the same one (note the colour of the furled sail)! This surprised us, but it should not have: it turns out that boat owners have to apply for private mooring licences and are allocated fixed positions at which they install a suitable mooring apparatus. Although this is common knowledge for boat owners, it likely isn’t for others.
The photographs are a visual record of some of the things we encountered along the way. However, the details recorded in them have more to do with aesthetics than with the experience – in photography of this kind, one tends to privilege what looks good over what happened. Sure, some of the photographs offer hints about the experience, but much of this is incidental and indirect. For example, when taking the photographs in Figures 5 and 6, it was certainly not my intention to record the wind direction. Indeed, that would have been a highly convoluted way to convey information that is directly and more accurately described by the data in Figure 2. That said, even data has limitations: it can help fill in details such as the wind direction and temperature, but it does not evoke any sense of what it was like to be there, to experience the experience, so to speak.
Neither data nor photographs are the stuff memories are made of. For that one must look elsewhere.
–x–
As Heraclitus famously said, one can never step into the same river twice. So it is with walks. Every experience of a walk is unique; although the map remains the same, the territory is invariably different on each traverse, even if only subtly so. Indeed, one could say that the territory is defined through one’s experience of it. That experience is not reproducible; there are always differences in the details.
As John Salvatier points out, reality has a surprising amount of detail, much of which we miss because we look but do not see. Seeing entails a deliberate focus on minutiae such as the play of morning light on the river or tree; the cool damp from last night’s rain; changes in the built environment, some obvious, others less so. Walks are made memorable by precisely such details, but paradoxically these can be hard to record in a meaningful way. Factual (aka data-driven) descriptions end up being laundry lists that inevitably miss the things that make the experience memorable.
Poets do a better job. Consider, for instance, Tennyson’s take on a brook:
“…I chatter over stony ways,
In little sharps and trebles,
I bubble into eddying bays,
I babble on the pebbles.

With many a curve my banks I fret
By many a field and fallow,
And many a fairy foreland set
With willow-weed and mallow.

I chatter, chatter, as I flow
To join the brimming river,
For men may come and men may go,
But I go on for ever….”
One can almost see and hear a brook. Not Tennyson’s, but one’s own version of it.
Evocative descriptions aren’t the preserve of poets alone. Consider the following description of Sydney Harbour, taken from D. H. Lawrence’s Kangaroo:
“…He took himself off to the gardens to eat his custard apple – a pudding inside a knobbly green skin – and to relax into the magic ease of the afternoon. The warm sun, the big, blue harbour with its hidden bays, the palm trees, the ferry steamers sliding flatly, the perky birds, the inevitable shabby-looking, loafing sort of men strolling across the green slopes, past the red poinsettia bush, under the big flame-tree, under the blue, blue sky – Australian Sydney with a magic like sleep, like sweet, soft sleep – a vast, endless, sun-hot, afternoon sleep with the world a mirage. He could taste it all in the soft, sweet, creamy custard apple. A wonderful sweet place to drift in….”
Written in 1923, it remains a brilliant evocation of the Harbour even today.
Tennyson’s brook and Lawrence’s Sydney do a better job than photographs or factual description, even though the latter are considered more accurate and objective. Why? It is because their words are more than mere description: they are stories that convey a sense of what it is like to be there.
–x–
The two editions of the walk covered exactly the same route, but our experiences of the territory on the two instances were very different. The differences were in details that ultimately added up to the uniqueness of each experience. These details cannot be captured by maps and visual or written records, even in principle. So although one may gain familiarity with certain aspects of a territory through repetition, each lived experience of it will be unique. Moreover, no two individuals will experience the territory in exactly the same way.
When bidding for projects, consultancies make much of their prior experience of doing similar projects elsewhere. The truth, however, is that although two projects may look identical on paper they will invariably be different in practice. The map, as Korzybski famously said, is not the territory. Even more, every encounter with the territory is different.
All this is not to say that maps (or plans or data) are useless; one needs them as orienting devices. However, one must accept that they offer limited guidance on how to deal with the day-to-day events and occurrences on a project. These tend to be unique because they are highly context dependent. The lived experience of a project is therefore necessarily different from the planned one. How can one gain insight into the former? Tennyson and Lawrence offer a hint: look to the stories told by people who have traversed the territory, rather than the maps, plans and data-driven reports they produce.
Another distant horizon
It was with a sense of foreboding that I reached for my phone that Sunday morning. I had spoken with him the day before and although he did not say, I could sense he was tired.
“See you in a few weeks,” he said as he signed off, “and I’m especially looking forward to seeing the boys.”
It was not to be. Twelve hours later, a missed call from my brother Kedar and the message:
“Dad passed away about an hour or so ago…”
The rest of the day passed in a blur of travel arrangements and Things that Had to Be Done Before Leaving. My dear wife eased my way through the day.
I flew out that evening.
–x–
A difficult journey home. I’m on a plane, surrounded by strangers. I wonder how many of them are making the journey for similar reasons.
I turn to the inflight entertainment. It does not help. Switching to classical music, I drift in and out of a restless sleep.
About an hour later, I awaken to the sombre tones of Mozart’s Requiem. I cover my head with the blanket and shed a tear silently in the dark.
–x–
Monday morning, Mumbai airport, the waiting taxi and the long final leg home.
Six hours later, in the early evening I arrive to see Kedar, waiting for me on the steps, just as Dad used to.
I hug my Mum, pale but composed. She smiles and enquires about my journey. “I’m so happy to see you,” she says.
She has never worn white, and does not intend to start now. “Pass me something colourful,” she requests the night nurse, “I want to celebrate his life, not mourn his passing.”
–x–
The week in Vinchurni is a blur of visitors, many of whom have travelled from afar. I’m immensely grateful for the stories they share about my father, deeply touched that many of them consider him a father too.
Mum ensures she meets everyone, putting them at ease when the words don’t come easily. It is so hard to find the words to mourn another’s loss. She guides them – and herself – through the rituals of condolence with a grace that seems effortless. I know it is not.
–x–
Some days later, I sit in his study and the memories start to flow…
…
A Skype call on my 50th birthday.
“Many Happy Returns,” he booms, “…and remember, life begins at 50.”
He knew that from his own experience: as noted in this tribute written on his 90th birthday, his best work was done after he retired from the Navy at the ripe young age of 54.
…
A conversation in Vinchurni, maybe twenty years ago. We are talking about politics, the state of India and the world in general. Dad sums up the issue brilliantly:
“The problem,” he says, “is that we celebrate the mediocre. We live in an age of mediocrity.”
…
Years earlier, I’m faced with a difficult choice. I’m leaning one way, Dad recommends the other.
At one point in the conversation he says, “Son, it’s your choice but I think you are situating your appreciation instead of appreciating the situation.”
He had the uncanny knack of finding the words to get others to reconsider their ill-considered choices.
…
Five-year-old me on a long walk with Dad and Kedar. We are at a lake in the Nilgiri Hills, where I spent much of my childhood. We collect wood and brew tea on an improvised three-stone fire. Dad’s beard is singed brown as he blows on the kindling. Kedar and I think it’s hilarious and can’t stop laughing. He laughs with us.
–x–
“I have many irons in the fire,” he used to say, “and they all tend to heat up at the same time.”
It made for a hectic, fast-paced life in which he achieved a lot by attempting so much more.
This photograph sums up how I will always remember him, striding purposefully towards another distant horizon.
Peirce, Holmes and a gold chain – an essay on abductive reasoning
“It has long been an axiom of mine that the little things are infinitely the most important.” – Sir Arthur Conan Doyle (A Case of Identity)
The scientific method is a systematic approach to acquiring and establishing knowledge about how the world works. A scientific investigation typically starts with the formulation of a hypothesis – an educated, evidencebased guess about the mechanism behind the phenomenon being studied – and proceeds by testing how well the hypothesis holds up against experiments designed to disconfirm it.
Although many philosophers have waxed eloquent about the scientific method, very few of them have talked about the process of hypothesis generation. Indeed, most scientists will recognise a good hypothesis when they stumble upon one, but they usually will not be able to say how they came upon it. Hypothesis generation is essentially a creative act that requires a deep familiarity with the phenomenon in question and a spark of intuition. The latter is absolutely essential, a point captured eloquently in the following lines attributed to Einstein:
“…[Man] makes this cosmos and its construction the pivot of his emotional life in order to find in this way the peace and serenity which he cannot find in the narrow whirlpool of personal experience. The supreme task is to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience can reach them…” – quoted from Zen and The Art of Motorcycle Maintenance by Robert Pirsig.
The American philosopher, Charles Peirce, recognised that hypothesis generation involves a special kind of reasoning, one that enables the investigator to zero in on a small set of relevant facts out of an infinity of possibilities.
As Peirce wrote in one of his papers:
“A given object presents an extraordinary combination of characters of which we should like to have an explanation. That there is any explanation of them is a pure assumption; and if there be, it is [a single] fact which explains them; while there are, perhaps, a million other possible ways of explaining them, if they were not all, unfortunately, false.
A man is found in the streets of New York stabbed in the back. The chief of police might open a directory and put his finger on any name and guess that that is the name of the murderer. How much would such a guess be worth? But the number of names in the directory does not approach the multitude of possible laws of attraction which could have accounted for Kepler’s law of planetary motion and, in advance of verification by predictions of perturbations etc., would have accounted for them to perfection. Newton, you will say, assumed that the law would be a simple one. But what was that but piling guess on guess? Surely vastly more phenomena in nature are complex than simple…” – quoted from this paper by Thomas Sebeok.
Peirce coined the term abduction (as opposed to induction or deduction) to refer to the creative act of hypothesis generation. In the present day, the term is generally used to refer to the process of justifying hypotheses rather than generating them (see this article for more on the distinction). In the remainder of this piece I will use the term in its Peircean sense.
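To make the distinction concrete, here is a toy sketch (in Python) of abduction in its modern, justificatory reading: given an observation, pick the candidate hypothesis that best explains it. The hypotheses, priors and likelihoods below are invented purely for illustration; they are not Peirce’s examples, and real abductive reasoning is of course far less mechanical.

```python
# Toy "inference to the best explanation": score each candidate hypothesis
# by prior belief times how well it predicts the observation.
# All numbers here are made up for illustration.

def best_explanation(observation, hypotheses):
    """Return (best hypothesis name, all scores) for an observation."""
    scored = {
        name: prior * likelihood.get(observation, 0.0)
        for name, (prior, likelihood) in hypotheses.items()
    }
    return max(scored, key=scored.get), scored

# Candidate explanations for "the lawn is wet": (prior, likelihoods).
hypotheses = {
    "rain":      (0.3, {"wet lawn": 0.9}),
    "sprinkler": (0.2, {"wet lawn": 0.8}),
    "dew":       (0.5, {"wet lawn": 0.1}),
}

best, scores = best_explanation("wet lawn", hypotheses)
print(best)  # "rain": 0.27 beats "sprinkler" (0.16) and "dew" (0.05)
```

Holmes’ genius, on this reading, lay not in the scoring but in choosing which “trifles” to treat as observations worth explaining in the first place.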
–x–
Contrary to what is commonly stated, Arthur Conan Doyle’s fictional detective employed abductive rather than deductive methods in his cases. Consider the following lines, taken from an early paragraph of Sherlock Holmes’ most celebrated exploit, The Adventure of the Speckled Band. We join Holmes in conversation with a lady who has come to him for assistance:
“…You have come in by train this morning, I see.”
“You know me, then?”
“No, but I observe the second half of a return ticket in the palm of your left glove. You must have started early, and yet you had a good drive in a dog-cart, along heavy roads, before you reached the station.”
The lady gave a violent start and stared in bewilderment at my companion.
“There is no mystery, my dear madam,” said he, smiling. “The left arm of your jacket is spattered with mud in no less than seven places. The marks are perfectly fresh. There is no vehicle save a dog-cart which throws up mud in that way, and then only when you sit on the left-hand side of the driver.”
“Whatever your reasons may be, you are perfectly correct,” said she. “I started from home before six, reached Leatherhead at twenty past, and came in by the first train to Waterloo…
Notice what Holmes does: he forms hypotheses about what the lady did, based on a selective observation of facts. Nothing is said about why he picks those particular facts – the ticket stub and the freshness / location of mud spatters on the lady’s jacket. Indeed, as Holmes says in another story, “You know my method. It is founded upon the observation of trifles.”
Abductive reasoning is essentially about recognising which trifles are important.
–x–
I have a gold chain that my mum gave me many years ago. I’ve worn it for so long that now I barely notice it. The only time I’m consciously aware that I’m wearing the chain is when I finger it around my neck, a (nervous?) habit I have developed over time.
As you might imagine, I’m quite attached to my gold chain. So, when I discovered it was missing a few days ago, my first reaction was near panic: I felt like a part of me had gone missing.
Since I hardly ever take the chain off, I could not think of any plausible explanation for how I might have lost it. Indeed, the only times I have had to take the chain off are when going in for an X-ray or, on occasion, when passing through airport security.
Where could it have gone?
After mulling over it for a while, the only plausible explanation I could think of is that I had taken it off at airport security when returning from an overseas trip a week earlier, and had somehow forgotten to collect it on the other side. Realising that it would be near impossible to recover it, I told myself to get used to the idea that it was probably gone for good.
That Sunday, I went for a swim. After doing my laps, I went to the side of the pool for my customary shower. Now, anyone who has taken off a rash vest after a swim will know that it can be a struggle. I suspect this is because water trapped between skin and fabric forms a thin adhesive layer (a manifestation of surface tension perhaps?). Anyway, I wrestled the garment over my head and it eventually came free with a snap, generating a spray of droplets that gleamed in reflected light.
Later in the day, I was at the movies. For some reason, when coming out of the cinema, I remembered the rash vest and the flash of droplets. Hmm, I thought, “a gleam of gold….”
…A near-forgotten memory: I vaguely recalled a flash of gold while taking off my rash vest in the pool shower some days ago. Was it after my previous swim or the week before? I couldn’t be sure. But I distinctly remembered it had bothered me enough to check the floor of the cubicle cursorily. Finding nothing, I had completely forgotten about it and moved on.
Could it have come off there?
As I thought about it some more, possibility turned to plausibility: I became convinced that was what had happened. Although it was unlikely I would find the chain there now, it was worth a try in the hope that someone had found it and turned it in as lost property.
I stopped over at the pool on my way back from the movies and asked at reception.
“A gold chain? Hmm, I think you may be in luck,” he said, “I was doing an inventory of lost property last week and came across a chain. I was wondering why no one had come in to claim something so valuable.”
“You’re kidding,” I said, incredulous. “You mean you have a gold chain?”
“Yeah, and I’m pretty sure it will still be there unless someone else has claimed it,” he replied. “I’ll have a look in the safe. Can you describe it for me?”
I described it down to the brief inscription on the clasp.
“Wait here,” he said, “I’ll be a sec.”
It took longer than that but he soon emerged, chain in hand.
I could not believe my eyes; I had given up on ever recovering it. “Thanks so much,” I said fervently, “you won’t believe how much it means to me to have this back.”
“No worries mate,” he said, smiling broadly. “Happy to have helped.”
–x–
Endnote: in case you haven’t read it yet, I recommend you take ten minutes to read Sherlock Holmes’ finest adventure and abductive masterpiece, The Adventure of the Speckled Band.
Learning, evolution and the future of work
The Janus-headed rise of AI has prompted many discussions about the future of work. Most, if not all, are about AI-driven automation and its consequences for various professions. We are warned to prepare for this change by developing skills that cannot be easily “learnt” by machines. This sounds reasonable at first, but less so on reflection: if skills that were thought to be uniquely human less than a decade ago can now be done, at least partially, by machines, there is no guarantee that any specific skill one chooses to develop will remain automation-proof in the medium-term future.
This raises the question of what we can do, as individuals, to prepare for a machine-centric workplace. In this post I offer a perspective on this question, based on Gregory Bateson’s writings as well as my consulting and teaching experiences.
Levels of learning
Given that humans are notoriously poor at predicting the future, it should be clear that hitching one’s professional wagon to a specific set of skills is not a good strategy. Learning a set of skills may pay off in the short term, but it is unlikely to work in the long run.
So what can one do to prepare for an ambiguous and essentially unpredictable future?
To answer this question, we need to delve into an important, yet oft-overlooked aspect of learning.
A key characteristic of learning is that it is driven by trial and error. To be sure, intelligence may help winnow out poor choices at some stages of the process, but one cannot eliminate error entirely. Indeed, it is not desirable to do so because error is essential for that “aha” instant that precedes insight. Learning therefore has a stochastic element: the specific sequence of trial and error followed by an individual is unpredictable and likely to be unique. This is why everyone learns differently: the mental model I build of a concept is likely to be different from yours.
In a paper entitled The Logical Categories of Learning and Communication, Bateson noted that the stochastic nature of learning has an interesting consequence:
If we accept the overall notion that all learning is in some degree stochastic (i.e., contains components of “trial and error”), it follows that an ordering of the processes of learning can be built upon a hierarchic classification of the types of error which are to be corrected in the various learning processes.
Let’s unpack this claim by looking at his proposed classification:
Zero order learning – Zero order learning refers to situations in which a given stimulus (or question) results in the same response (or answer) every time. Any instinctive behaviour – such as a reflex response on touching a hot kettle – is an example of zero order learning. Such learning is hardwired in the learner, who responds with the “correct” option to a fixed stimulus every single time. Since the response does not change with time, the process is not subject to trial and error.
First order learning (Learning I) – Learning I is where an individual learns to select a correct option from a set of similar elements. It involves a specific kind of trial and error that is best explained through a couple of examples. The canonical example of Learning I is memorization: Johnny recognises the letter “A” because he has learnt to distinguish it from the 25 other similar possibilities. Another example is Pavlovian conditioning wherein the subject’s response is altered by training: a dog that initially salivates only when it smells food is trained, by repetition, to salivate when it hears the bell.
A key characteristic of Learning I is that the individual learns to select the correct response from a set of comparable possibilities – comparable because the possibilities are of the same type (e.g. pick a letter from the alphabet). Consequently, first order learning cannot lead to a qualitative change in the learner’s response. Much of traditional school and university teaching is geared toward first order learning: students are taught to develop the “correct” understanding of concepts and techniques via a repetition-based process of trial and error.
As an aside, note that much of what goes under the banner of machine learning and AI can be also classed as first order learning.
Second order learning (Learning II) – Second order learning involves a qualitative change in the learner’s response to a given situation. Typically, this occurs when a learner sees a familiar problem situation in a completely new light, thus opening up new possibilities for solutions. Learning II therefore necessitates a higher order of trial and error, one that is beyond the ken of machines, at least at this point in time.
Complex organisational problems, such as determining a business strategy, require a second order approach because they cannot be precisely defined and therefore lack an objectively correct solution. Echoing Horst Rittel, solutions to such problems are not true or false, but better or worse.
Much of the teaching that goes on in schools and universities hinders second order learning because it implicitly conditions learners to frame problems in ways that make them amenable to familiar techniques. However, as Russell Ackoff noted, “outside of school, problems are seldom given; they have to be taken, extracted from complex situations…” Two aspects of this perceptive statement bear further consideration. Firstly, to extract a problem from a situation one has to appreciate or make sense of the situation. Secondly, once the problem is framed, one may find that solving it requires skills that one does not possess. I expand on the implications of these points in the following two sections.
Sensemaking and second order learning
In an earlier piece, I described sensemaking as the art of collaborative problem formulation. There is a huge variety of sensemaking approaches; the Gamestorming site describes many of them in detail. Most of these are aimed at exploring a problem space by harnessing the collective knowledge of a group of people who have diverse, even conflicting, perspectives on the issue at hand. The greater the diversity, the more complete the exploration of the problem space.
Sensemaking techniques help in elucidating the context in which a problem lives. This refers to the problem’s environment, and in particular the constraints that the environment imposes on potential solutions to the problem. As Bateson puts it, context is “a collective term for all those events which tell an organism among what set of alternatives [it] must make [its] next choice.” But this begs the question as to how these alternatives are to be determined. The question cannot be answered directly because it depends on the specifics of the environment in which the problem lives. Surfacing these by asking the right questions is the task of sensemaking.
As a simple example, if I request you to help me formulate a business strategy, you are likely to begin by asking me a number of questions such as:
- What kind of business are you in?
- Who are your customers?
- What’s the competitive landscape?
- …and so on
Answers to these questions fill out the context in which the business operates, thus making it possible to formulate a meaningful strategy.
It is important to note that context rarely remains static; it evolves over time. Indeed, many companies have faded away because they failed to appreciate changes in their business context: Kodak is a well-known example, and there are many more. So organisations must evolve too. However, it is a mistake to think of an organisation and its environment as evolving independently; the two always evolve together. Such coevolution is as true of natural systems as it is of social ones. As Bateson tells us:
…the evolution of the horse from Eohippus was not a one-sided adjustment to life on grassy plains. Surely the grassy plains themselves evolved [on the same footing] with the evolution of the teeth and hooves of the horses and other ungulates. Turf was the evolving response of the vegetation to the evolution of the horse. It is the context which evolves.
Indeed, one can think of evolution by natural selection as a process by which nature learns (in a second-order sense).
The foregoing discussion points to another problem with traditional approaches to education: we are implicitly taught that problems, once solved, stay solved. It is seldom so in real life because, as we have noted, the environment evolves even if the organisation remains static. In the worst-case scenario (which happens often enough) the organisation will die if it does not adapt appropriately to changes in its environment. If this is true, then second-order learning is important not just for individuals but for organisations as a whole. This harks back to the notion of the learning organisation, developed and evangelised by Peter Senge in the early 90s. A learning organisation is one that continually adapts itself to a changing environment. As one might imagine, it is an ideal that is difficult to achieve in practice. Indeed, attempts to create learning organisations have often ended up with paradoxical outcomes. In view of this, it seems more practical for organisations to focus on developing what one might call learning individuals – people who are capable of adapting to changes in their environment by continual learning.
Learning to learn
Clichés aside, the modern workplace is characterised by rapid, technology-driven change. It is difficult for an individual to keep up because one has to:
- Figure out which changes are significant and therefore worth responding to.
- Be capable of responding to them meaningfully.
The media hype about the sexiest job of the 21st century and the like further fuels the fear of obsolescence. One feels an overwhelming pressure to do something. The old adage about combating fear with action holds true: one has to do something, but the question then is: what meaningful action can one take?
The fact that this question arises points to a failure of traditional university education. With its undue focus on teaching specific techniques, the more important second-order skill of learning to learn has fallen by the wayside. In reality, though, it is now easier than ever to learn new skills on one’s own. When I was hired as a database architect in 2004, there were few quality resources available for free. Ten years later, I was able to start teaching myself machine learning using top-notch software, backed by countless quality tutorials in blog and video formats. However, I wasted a lot of time getting started because it took me a while to get over my reluctance to explore without a guide. Cultivating the habit of learning on my own earlier would have made it a lot easier.
Back to the future of work
When industry complains about new graduates being ill-prepared for the workplace, educational institutions respond by updating curricula with more (New!! Advanced!!!) techniques. However, the complaints continue, and Bateson’s notion of second order learning tells us why:
- Firstly, problem solving is distinct from problem formulation; the difference is akin to the distinction between human and machine intelligence.
- Secondly, one does not know what skills one may need in the future, so instead of learning specific skills one has to learn how to learn.
In my experience, it is possible to teach these higher order skills to students in a classroom environment. However, it has to be done in a way that starts from where students are in terms of skills and dispositions and moves them gradually to less familiar situations. The approach is based on David Cavallo’s work on emergent design which I have often used in my consulting work. Two examples may help illustrate how this works in the classroom:
- Many analytically inclined people think sensemaking is a waste of time because they see it as “just talk”. So, when teaching sensemaking, I begin with quantitative techniques to deal with uncertainty, such as Monte Carlo simulation, and then gradually introduce examples of uncertainties that are hard, if not impossible, to quantify. This progression naturally leads on to problem situations in which they see the value of sensemaking.
- When teaching data science, it is difficult to comprehensively cover basic machine learning algorithms in a single semester. However, students are often reluctant to explore on their own because they tend to be daunted by the mathematical terminology and notation. To encourage exploration (i.e. learning to learn) we use a two-step approach: a) classes focus on intuitive explanations of algorithms and the commonalities between concepts used in different algorithms – the classes are not lectures but interactive sessions involving lots of exercises and Q&A; b) the assignments go beyond what is covered in the classroom (but remain well within reach of most students), which forces them to learn on their own. The approach works: just the other day, my wonderful co-teacher, Alex, commented on the amazing learning journey of some of the students – so tentative and hesitant at first, but well on their way to becoming confident data professionals.
In the end, though, whether or not an individual learner learns depends on the individual. As Bateson once noted:
Perhaps the best documented generalization in the field of psychology is that, at any given moment, the behavioral characteristics of a mammal, and especially of [a human], depend upon the previous experience and behavior of that individual.
The choices we make when faced with change depend on our individual natures and experiences. Educators can’t do much about the former but they can facilitate more meaningful instances of the latter, even within the confines of the classroom.
An intuitive introduction to support vector machines using R – Part 1
About a year ago, I wrote a piece on support vector machines as a part of my gentle introduction to data science R series. So it is perhaps appropriate to begin this piece with a few words about my motivations for writing yet another article on the topic.
Late last year, a curriculum lead at DataCamp got in touch to ask whether I’d be interested in developing a course on SVMs for them.
My answer was, obviously, an enthusiastic “Yes!”
Instead of rehashing what I had done in my previous article, I thought it would be interesting to try an approach that focuses on building an intuition for how the algorithm works using examples of increasing complexity, supported by visualisation rather than math. This post is the first part of a two-part series based on this approach.
The article builds up some basic intuitions about support vector machines (abbreviated henceforth as SVM) and then focuses on linearly separable problems. Part 2 (to be released at a future date) will deal with radially separable and more complex data sets. The focus throughout is on developing an understanding of what the algorithm does rather than the technical details of how it does it.
Prerequisites for this series are a basic knowledge of R and some familiarity with the ggplot package. However, even if you don’t have the latter, you should be able to follow much of what I cover so I encourage you to press on regardless.
<advertisement> If you have a DataCamp account, you may want to check out my course on support vector machines using R. Chapters 1 and 2 of the course closely follow the path I take in this article. </advertisement>
A one dimensional example
A soft drink manufacturer has two brands of their flagship product: Choke (sugar content of 11g/100ml) and ChokeR (sugar content 8g/100 ml). The actual sugar content can vary quite a bit in practice so it can sometimes be hard to figure out the brand given the sugar content. Given sugar content data for 25 samples taken randomly from both populations (see file sugar_content.xls), our task is to come up with a decision rule for determining the brand.
Since this is a one-variable problem, the simplest way to discern whether the samples fall into distinct groups is through visualisation. Here’s one way to do this using ggplot:
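The code block that originally sat here can be sketched as follows; the data frame name sugar_df, the column name sugar_content and the choice of readxl as the Excel reader are all my assumptions:

```r
library(ggplot2)
library(readxl)  # assumed choice of Excel reader

# read the 25 samples; the column name 'sugar_content' is an assumption
sugar_df <- read_excel("sugar_content.xls")

# simulate a 1-D plot by setting all y values to 0
p <- ggplot(data=sugar_df, aes(x=sugar_content, y=0)) +
  geom_point() +
  geom_text(aes(label=sugar_content), size=2.5, vjust=2, hjust=0.5) +
  scale_y_continuous(limits=c(-1, 1)) +
  theme(axis.title.y=element_blank(), axis.text.y=element_blank(),
        axis.ticks.y=element_blank())
p
```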
…and here’s the resulting plot:
Note that we’ve simulated a one-dimensional plot by setting all the y values to 0.
From the plot, it is evident that the samples fall into distinct groups: low sugar content, bounded above by the 8.8 g/100ml sample and high sugar content, bounded below by the 10 g/100ml sample.
Clearly, any point that lies between the two points is an acceptable decision boundary. We could, for example, pick 9.1g/100ml and 9.7g/100ml. Here’s the R code with those points added in. Note that we’ve made the points a bit bigger and coloured them red to distinguish them from the sample points.
```r
d_bounds <- data.frame(sep = c(9.1, 9.7))  # the two candidate boundaries
p + geom_point(data=d_bounds, aes(x=sep, y=0), colour="red", size=3) +
  geom_text(data=d_bounds, aes(x=sep, y=0), label=d_bounds$sep, size=2.5,
            vjust=2, hjust=0.5, colour="red")
```
And here’s the plot:
Now, a bit about the decision rule. Say we pick the first point, 9.1 g/100ml, as the decision boundary; our classifier (in R) would then be:
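A minimal sketch of such a classifier (the function name is my own):

```r
# decision rule: sugar content above 9.1 g/100ml => Choke, otherwise ChokeR
classify_sample <- function(sugar_content) {
  ifelse(sugar_content > 9.1, "Choke", "ChokeR")
}

classify_sample(c(8.6, 10.4))  # returns "ChokeR" "Choke"
```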
The other one is left for you as an exercise.
Now, it is pretty clear that although either of these points defines an acceptable decision boundary, neither of them is the best. Let’s try to formalise our intuitive notion of why this is so.
The margin is the distance between the points in the two classes that are closest to the decision boundary. In the case at hand, the margin is 1.2 g/100ml, which is the difference between the two extreme points at 8.8 g/100ml (ChokeR) and 10 g/100ml (Choke). It should be clear that the best separator is the one that lies halfway between the two extreme points. This is called the maximum margin separator. The maximum margin separator in the case at hand is simply the average of the two extreme points:
```r
mm_sep <- data.frame(sep = (8.8 + 10) / 2)  # average of the two extreme points
p + geom_point(data=mm_sep, aes(x=mm_sep$sep, y=c(0)), colour="blue", size=4)
```
And here’s the plot:
We are dealing with a one dimensional problem here so the decision boundary is a point. In a moment we will generalise this to a two dimensional case in which the boundary is a straight line.
Let’s close this section with some general points.
Remember this is a sample not the entire population, so it is quite possible (indeed likely) that there will be as yet unseen samples of ChokeR and Choke that have a sugar content greater than 8.8 and less than 10 respectively. So, the best classifier is one that lies at the greatest possible distance from both classes. The maximum margin separator is that classifier.
This toy example serves to illustrate the main aim of SVMs, which is to find an optimal separation boundary in the sense described here. However, doing this for real-life problems is not so simple because life is not one dimensional. In the remainder of this article and its yet-to-be-written sequel, we will work through examples of increasing complexity so as to develop a good understanding of how SVMs work, in addition to gaining practical experience with a popular SVM implementation in R.
<Advertisement> Again, for those of you who have DataCamp premium accounts, here is a course that covers pretty much the entire territory of this two part series. </Advertisement>
Linearly separable case
The next level of complexity is a two dimensional case (2 predictors) in which the classes are separated by a straight line. We’ll create such a dataset next.
Let’s begin by generating 200 points with attributes x1 and x2, randomly distributed between 0 and 1. Here’s the R code:
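The generation step can be sketched as follows (the data frame name df and the seed value are my choices):

```r
set.seed(42)  # arbitrary seed, for reproducibility
n <- 200

# attributes x1 and x2, uniformly distributed between 0 and 1
df <- data.frame(x1 = runif(n), x2 = runif(n))
```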
Let’s visualise the generated data using a scatter plot:
And here’s the plot
Now let’s classify the points that lie above the line x1=x2 as belonging to the class +1 and those that lie below it as belonging to class -1 (the class values are arbitrary choices, I could have chosen them to be anything at all). Here’s the R code:
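A sketch of the classification step, assuming the data frame generated above is called df and that x1 is plotted on the horizontal axis (so “above the line” means x2 > x1):

```r
# recap: the randomly generated attributes
set.seed(42)
df <- data.frame(x1 = runif(200), x2 = runif(200))

# class +1 above the line x1 = x2, class -1 below it
df$y <- factor(ifelse(df$x2 > df$x1, 1, -1), levels = c(-1, 1))
```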
Let’s modify the plot in Figure 4, colouring the points classified as +1 in blue and those classified as -1 in red. For good measure, let’s also add in the decision boundary. Here’s the R code:
Note that the parameters in geom_abline() are derived from the fact that the line x1=x2 has slope 1 and y intercept 0.
Here’s the resulting plot:
Next let’s introduce a margin in the dataset. To do this, we need to exclude points that lie within a specified distance of the boundary. A simple way to approximate this is to exclude points whose x1 and x2 values differ by less than a pre-specified value, delta. Here’s the code to do this with delta set to 0.05 units.
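A sketch of the exclusion step (df1 is my name for the reduced dataset; the recap lines rebuild df as above):

```r
# recap: the dataset built in the previous steps
set.seed(42)
df <- data.frame(x1 = runif(200), x2 = runif(200))
df$y <- factor(ifelse(df$x2 > df$x1, 1, -1), levels = c(-1, 1))

delta <- 0.05

# keep only points whose x1 and x2 values differ by more than delta
df1 <- df[abs(df$x1 - df$x2) > delta, ]
nrow(df1)  # fewer than the original 200 points
```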
The check on the number of datapoints tells us that a number of points have been excluded.
Running the previous ggplot code block yields the following plot which clearly shows the reduced dataset with the depopulated region near the decision boundary:
Let’s add the margin boundaries to the plot. We know that these are parallel to the decision boundary and lie delta units on either side of it. In other words, the margin boundaries have slope=1 and y intercepts delta and –delta. Here’s the ggplot code:
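One way to sketch this, assuming the reduced dataset is in df1 and delta is 0.05 as above:

```r
library(ggplot2)

delta <- 0.05  # as set earlier

p <- ggplot(data=df1, aes(x=x1, y=x2, colour=y)) +
  geom_point() +
  scale_colour_manual(values=c("-1"="red", "1"="blue")) +
  # decision boundary: the line x1 = x2 (slope 1, intercept 0)
  geom_abline(slope=1, intercept=0) +
  # margin boundaries, delta units above and below the decision boundary
  geom_abline(slope=1, intercept=delta, linetype="dashed") +
  geom_abline(slope=1, intercept=-delta, linetype="dashed")
p
```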
And here’s the plot with the margins:
OK, so we have constructed a dataset that is linearly separable, which is just shorthand for saying that the classes can be separated by a straight line. Further, the dataset has a margin, i.e. there is a “gap”, so to speak, between the classes. Let’s save the dataset so that we can use it in the next section, where we’ll take a first look at the svm() function in the e1071 package.
That done, we can now move on to…
Linear SVMs
Let’s begin by reading in the datafile we created in the previous section:
We then split the data into training and test sets using an 80/20 random split. There are many ways to do this. Here’s one:
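Here is one way to do the split (the names trainset and testset, and the seed, are my choices):

```r
set.seed(1)  # arbitrary seed, for reproducibility

# assign roughly 80% of rows to the training set at random
df[, "train"] <- ifelse(runif(nrow(df)) < 0.8, 1, 0)
trainset <- df[df$train == 1, c("x1", "x2", "y")]
testset  <- df[df$train == 0, c("x1", "x2", "y")]
```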
The next step is to build an SVM classifier model. We will do this using the svm() function, which is available in the e1071 package. The svm() function has a range of parameters; I explain some of the key ones below, in particular type, cost, kernel and scale. It is worth browsing the documentation for more details.
The type parameter specifies the algorithm to be invoked by the function. The algorithm is capable of doing both classification and regression. We’ll focus on classification in this article. Note that there are two types of classification algorithms, nu and C classification. They essentially differ in the way that they penalise margin and boundary violations, but can be shown to lead to equivalent results. We will stick with C classification as it is more commonly used. The “C” refers to the cost which we discuss next.
The cost parameter specifies the penalty to be applied for boundary violations. This parameter can vary from 0 to infinity (in practice a large number compared to 0, say 10^6 or 10^8). We will explore the effect of varying cost later in this piece. To begin with, however, we will leave it at its default value of 1.
The kernel parameter specifies the kind of function to be used to construct the decision boundary. The options are linear, polynomial and radial. In this article we’ll focus on linear kernels as we know the decision boundary is a straight line.
The scale parameter is a Boolean that tells the algorithm whether or not the datapoints should be scaled to have zero mean and unit variance (i.e. shifted by the mean and scaled by the standard deviation). Scaling is generally good practice to avoid undue influence from attributes that have unduly large numeric values. However, in this case we will avoid scaling as we know the attributes are bounded and (more importantly) we would like to plot the boundary obtained from the algorithm manually.
Building the model is a simple oneline call, setting appropriate values for the parameters:
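The call can be sketched as follows, assuming the training data is in trainset (the model name svm_model matches the object referred to later in the text):

```r
library(e1071)

# C-classification with a linear kernel; scaling off so we can later
# plot the boundary in the original coordinates
svm_model <- svm(y ~ ., data = trainset,
                 type = "C-classification",
                 kernel = "linear",
                 scale = FALSE)
```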
We expect a linear model to perform well here since the dataset is linearly separable by construction. Let’s confirm this by calculating training and test accuracy. Here’s the code:
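A sketch of the accuracy calculation, assuming the model and datasets are named as above:

```r
# training accuracy: proportion of training points classified correctly
pred_train <- predict(svm_model, trainset)
mean(pred_train == trainset$y)

# test accuracy: proportion of held-out points classified correctly
pred_test <- predict(svm_model, testset)
mean(pred_test == testset$y)
```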
The perfect accuracies confirm our expectation. However, accuracies by themselves are misleading because the story is somewhat more nuanced. To understand why, let’s plot the predicted decision boundary and margins using ggplot. To do this, we have to first extract information regarding these from the svm model object. One can obtain summary information for the model by typing in the model name like so:
```r
## output (abridged); the formula and data names shown are assumptions
svm(formula = y ~ ., data = trainset, type = "C-classification",
    kernel = "linear", scale = FALSE)
```
This outputs the following: the function call, SVM type, kernel and cost (which is set to its default). In case you are wondering about gamma, although it’s set to 0.5 here, it plays no role in linear SVMs. We’ll say more about it in the sequel to this article, in which we’ll cover more complex kernels. More interesting are the support vectors. In a nutshell, these are training dataset points that specify the location of the decision boundary. We can develop a better understanding of their role by visualising them. To do this, we need to know their coordinates and indices (positions within the dataset). This information is stored in the SVM model object: the index element of svm_model contains the indices of the training dataset points that are support vectors, and the SV element lists the coordinates of these points. The following R code lists these explicitly (note that I’ve not shown the outputs in the code snippet below):
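The listing can be sketched as:

```r
# indices of the support vectors within the training dataset
svm_model$index

# coordinates of the support vectors
svm_model$SV
```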
Let’s use the indices to visualise these points in the training dataset. Here’s the ggplot code to do that:
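One way to do this (the colours, size and transparency of the overlay are my display choices):

```r
library(ggplot2)

# plot the training data, highlighting the support vectors
p <- ggplot(data=trainset, aes(x=x1, y=x2, colour=y)) +
  geom_point() +
  scale_colour_manual(values=c("-1"="red", "1"="blue")) +
  # overlay the support vectors, picked out via their indices
  geom_point(data=trainset[svm_model$index, ],
             aes(x=x1, y=x2),
             colour="purple", size=4, alpha=0.5)
p
```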
And here is the plot:
We now see that the support vectors are clustered around the boundary and, in a sense, serve to define it. We will see this more clearly by plotting the predicted decision boundary. To do this, we need its slope and intercept. These aren’t directly available in svm_model, but they can be extracted from the coefs, SV and rho elements of the object.
The first step is to use coefs and the support vectors to build what’s called the weight vector, which is given by the product of the coefs matrix with the matrix containing the SVs. Note that the fact that only the support vectors play a role in defining the boundary is consistent with our expectation that the boundary should be fully specified by them. Indeed, this is often touted as a feature of SVMs: it is one of the few classifiers that depends on only a small subset of the training data, i.e. the datapoints closest to the boundary, rather than the entire dataset.
Once we have the weight vector, we can calculate the slope and intercept of the predicted decision boundary as follows:
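A sketch of the calculation, using the coefs, SV and rho elements mentioned above (the variable names w, slope_1 and intercept_1 match those used in the plotting code below):

```r
# weight vector: product of the coefs matrix and the support vector matrix
w <- t(svm_model$coefs) %*% svm_model$SV

# the boundary is w[1]*x1 + w[2]*x2 - rho = 0; in slope-intercept form:
slope_1 <- -w[1] / w[2]
intercept_1 <- svm_model$rho / w[2]
```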
Note that the slope and intercept are quite different from the correct values of 1 and 0 (reminder: the actual decision boundary is the line x1=x2 by construction). We’ll see how to improve on this shortly, but before we do that, let’s plot the decision boundary using the slope and intercept we have just calculated. Here’s the code:
And here’s the augmented plot:
The plot clearly shows how the support vectors “support” the boundary – indeed, if one draws line segments from each of the points to the boundary in such a way that they intersect the boundary at right angles, the segments can be thought of as “holding the boundary in place”. Hence the term support vector.
This is a good time to mention that the e1071 library provides a built-in plot method for the svm function. This is invoked as follows:
The svm plot function takes a formula specifying the plane on which the boundary is to be plotted. This is not necessary here as we have only two predictors (x1 and x2) which automatically define a plane.
Here is the plot generated by the above code:
Note that the axes are switched (x1 is on the y axis). Aside from that, the plot is reassuringly similar to our ggplot version in Figure 9. Also note that the support vectors are marked by “x”. Unfortunately the built-in function does not display the margin boundaries, but this is something we can easily add to our home-brewed plot. Here’s how. We know that the margin boundaries are parallel to the decision boundary, so all we need to find out is their intercepts. It turns out that the intercepts are offset by an amount 1/w[2] units on either side of the decision boundary. With that information in hand, we can now write the code to add in the margins to the plot shown in Figure 9. Here it is:
```r
# margin boundaries, offset by 1/w[2] on either side of the decision boundary
geom_abline(slope=slope_1, intercept = intercept_1 + 1/w[2], linetype="dashed") +
  geom_abline(slope=slope_1, intercept = intercept_1 - 1/w[2], linetype="dashed")
```
And here is the plot with the margins added in:
Note that the predicted margins are much wider than the actual ones (compare with Figure 7). As a consequence, many of the support vectors lie within the predicted margin – that is, they violate it. The upshot of the wide margin is that the decision boundary is not tightly specified. This is why we get a significant difference between the slope and intercept of the predicted decision boundary and the actual one. We can sharpen the boundary by narrowing the margin. How do we do this? We make margin violations more expensive by increasing the cost. Let’s see this margin-narrowing effect in action by building a model with cost = 100 on the same training dataset as before. Here is the code:
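The high-cost model can be sketched as follows (the model name svm_model_100 is my choice):

```r
library(e1071)

# same linear model as before, but with a much higher penalty
# for margin violations
svm_model_100 <- svm(y ~ ., data = trainset,
                     type = "C-classification",
                     kernel = "linear",
                     cost = 100,
                     scale = FALSE)
```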
I’ll leave you to calculate the training and test accuracies (as one might expect, these will be perfect).
Let’s inspect the cost=100 model:
```r
## output (abridged); the formula and data names shown are assumptions
svm(formula = y ~ ., data = trainset, type = "C-classification",
    kernel = "linear", cost = 100, scale = FALSE)
```
The number of support vectors is reduced from 55 to 6! We can plot these and the boundary / margin lines using ggplot as before. The code is identical to the previous case (see code block preceding Figure 8). If you run it, you will get the plot shown in Figure 12.
Since the boundary is more tightly specified, we would expect the slope and intercept of the predicted boundary to be considerably closer to their actual values of 1 and 0 respectively (as compared to the default cost case). Let’s confirm that this is so by calculating the slope and intercept as we did in the code snippets preceding Figure 9. Here’s the code:
Which nicely confirms our expectation.
The decision boundary and margins for the high cost case can also be plotted with the code shown earlier. Here it is for completeness:
```r
# margin boundaries for the cost=100 model
geom_abline(slope=slope_100, intercept = intercept_100 + 1/w[2], linetype="dashed") +
  geom_abline(slope=slope_100, intercept = intercept_100 - 1/w[2], linetype="dashed")
```
And here’s the plot:
SVMs that allow margin violations are called soft margin classifiers and those that do not are called hard margin classifiers. In this case, the hard margin classifier does a better job because it specifies the boundary more accurately than its soft counterpart. However, this does not mean that hard margin classifiers are to be preferred over soft ones in all situations. Indeed, in real life, where we usually do not know the shape of the decision boundary upfront, soft margin classifiers allow for a greater degree of uncertainty in the decision boundary, thus improving the generalizability of the classifier.
OK, so now we have a good feel for what the SVM algorithm does in the linearly separable case. We will round out this article by looking at a real world dataset that fortuitously turns out to be almost linearly separable: the famous (notorious?) iris dataset. It is instructive to look at this dataset because it serves to illustrate another feature of the e1071 SVM algorithm – its capability to handle classification problems that have more than 2 classes.
A multiclass problem
The iris dataset is well-known in the machine learning community as it features in many introductory courses and tutorials. It consists of 150 observations of 3 species of the iris flower – setosa, versicolor and virginica. Each observation consists of numerical values for 4 independent variables (predictors): petal length, petal width, sepal length and sepal width. The dataset is available in a standard installation of R as a built-in dataset. Let’s read it in and examine its structure:
Now, as it turns out, petal length and petal width are the key determinants of species. So let’s create a scatterplot of the datapoints as a function of these two variables (i.e. project each data point onto the petal length-petal width plane). We will also distinguish between species using different colours. Here’s the ggplot code to do this:
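One way to do this, using the built-in iris data frame:

```r
library(ggplot2)

# project the data onto the petal length-petal width plane,
# colouring points by species
p <- ggplot(data=iris, aes(x=Petal.Length, y=Petal.Width, colour=Species)) +
  geom_point()
p
```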
And here’s the plot:
On this plane we see a clear linear boundary between setosa and the other two species, versicolor and virginica. The boundary between the latter two is almost linear. Since there are four predictors, one would have to plot the other combinations to get a better feel for the data. I’ll leave this as an exercise for you and move on with the assumption that the data is nearly linearly separable. If the assumption is grossly incorrect, a linear SVM will not work well.
Up until now, we have discussed binary classification problems, i.e. those in which the predicted variable can take on only two values. In this case, however, the predicted variable, Species, can take on 3 values (setosa, versicolor and virginica). This brings up the question as to how the algorithm deals with multiclass classification problems – i.e. those involving datasets with more than two classes. The SVM algorithm does this using a one-against-one classification strategy. Here’s how it works:
- Divide the dataset (assumed to have N classes) into N(N-1)/2 subsets that have two classes each.
- Solve the binary classification problem for each of these subsets.
- Use a simple voting mechanism to assign a class to each data point.
Basically, each data point is assigned the most frequent classification it receives from all the binary classification problems it figures in.
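The splitting and voting recipe above is easy to sketch directly. The snippet below is an illustrative Python toy, not the article’s R code: the three pairwise predictors are hypothetical stand-ins for trained binary SVMs, and only the pairing and voting logic is the point.

```python
from collections import Counter
from itertools import combinations

def one_against_one_predict(pairwise_predictors, x):
    """Classify x by majority vote over all pairwise (two-class) predictors.

    pairwise_predictors maps a class pair (a, b) to a function that
    returns either a or b for a given data point.
    """
    votes = Counter(predict(x) for predict in pairwise_predictors.values())
    # The point gets the most frequent classification it receives
    return votes.most_common(1)[0][0]

# Three classes, so 3*(3-1)/2 = 3 binary problems
classes = ["setosa", "versicolor", "virginica"]
pairs = list(combinations(classes, 2))
assert len(pairs) == len(classes) * (len(classes) - 1) // 2

# Hypothetical pairwise predictors for a single (unspecified) data point
predictors = {
    ("setosa", "versicolor"): lambda x: "versicolor",
    ("setosa", "virginica"): lambda x: "setosa",
    ("versicolor", "virginica"): lambda x: "versicolor",
}
print(one_against_one_predict(predictors, x=None))  # versicolor (2 votes of 3)
```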
With that said, let’s get on with building the classifier. As before, we begin by splitting the data into training and test sets using an 80/20 random split. Here is the code to do this:
Then we build the model (default cost) and examine it:
The main thing to note is that the function call is identical to the binary classification case. We get some basic information about the model by typing in the model name as before:
And the train and test accuracies are computed in the usual way:
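The R listings for the split, fit and accuracy checks didn’t make it into this copy. As a rough equivalent, here is the same sequence sketched in Python with scikit-learn (whose SVC, like e1071’s svm, uses the one-against-one strategy internally for multiclass data); the 80/20 split and default cost follow the text:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# 80/20 random train/test split, as in the article
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Linear-kernel SVM with the default cost parameter
model = SVC(kernel="linear")
model.fit(X_train, y_train)

# Train and test accuracies, computed in the usual way
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))
```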
This looks good, but is potentially misleading because it is for a particular train/test split. Remember, in this case, unlike the earlier example, we do not know the shape of the actual decision boundary. So, to get a robust measure of accuracy, we should calculate the average test accuracy over a number of train/test partitions. Here’s some code to do that:
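The averaging code is likewise missing from this copy, so what follows is a hedged Python sketch of the idea rather than the article’s R: repeat the 80/20 split many times, refit, and average the test accuracies.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Average test accuracy over many random 80/20 train/test partitions
accuracies = []
for seed in range(100):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    model = SVC(kernel="linear").fit(X_tr, y_tr)
    accuracies.append(model.score(X_te, y_te))

print(f"mean test accuracy: {np.mean(accuracies):.3f}")
```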
Which is not too bad at all, indicating that the dataset is indeed nearly linearly separable. If you try different values of cost you will see that it does not make much difference to the average accuracy.
This is a good note to close this piece on. Those who have access to DataCamp premium courses will find that the content above is covered in chapters 1 and 2 of the course on support vector machines in R. The next article in this two-part series will cover chapters 3 and 4.
Summarising
My main objective in this article was to help develop an intuition for how SVMs work in simple cases. We illustrated the basic principles and terminology with a simple one-dimensional example and then worked our way up to linearly separable binary classification problems with multiple predictors. We saw how the latter can be solved using a popular SVM implementation available in R. We also saw that the algorithm can handle multiclass problems. All through, we used visualisations to see what the algorithm does and how the key parameters affect the decision boundary and margins.
In the next part (yet to be written) we will see how SVMs can be generalised to deal with complex, nonlinear decision boundaries. In essence, they use a mathematical trick to “linearise” these boundaries. We’ll delve into the details of this trick in an intuitive, visual way, as we have done here.
Many thanks for reading!
A gentle introduction to Monte Carlo simulation for project managers
This article covers the why, what and how of Monte Carlo simulation using a canonical example from project management – estimating the duration of a small project. Before starting, however, I’d like to say a few words about the tool I’m going to use.
Despite the bad rap spreadsheets get from tech types – and I have to admit that many of their complaints are justified – the fact is, Excel remains one of the most ubiquitous “computational” tools in the corporate world. Most business professionals would have used it at one time or another. So, if you’re a project manager and want the rationale behind your estimates to be accessible to the widest possible audience, you are probably better off presenting them in Excel than in SPSS, SAS, Python, R or pretty much anything else. Consequently, the tool I’ll use in this article is Microsoft Excel. For those who know about Monte Carlo and want to cut to the chase, here’s the Excel workbook containing all the calculations detailed in later sections. However, if you’re unfamiliar with the technique, you may want to have a read of the article before playing with the spreadsheet.
In keeping with the format of the tutorials on this blog, I’ve assumed very little prior knowledge about probability, let alone Monte Carlo simulation. Consequently, the article is verbose and the tone somewhat didactic.
Introduction
Estimation is a key part of a project manager’s role. The most frequent (and consequential) estimates they are asked to deliver relate to time and cost. Often these are calculated and presented as point estimates: i.e. single numbers – as in, this task will take 3 days. Or, a little better, as two-point ranges – as in, this task will take between 2 and 5 days. Better still, many use a PERT-like approach wherein estimates are based on 3 points: best, most likely and worst case scenarios – as in, this task will take between 2 and 5 days, but it’s most likely that we’ll finish on day 3. We’ll use three-point estimates as a starting point for Monte Carlo simulation, but first, some relevant background.
It is a truism, well borne out by experience, that it is easier to estimate small, simple tasks than large, complex ones. Indeed, this is why one of the early to-dos in a project is the construction of a work breakdown structure. However, a problem arises when one combines the estimates for individual elements into an overall estimate for a project or a phase thereof. It is that a straightforward addition of individual estimates or bounds will almost always lead to a grossly incorrect estimation of overall time or cost. The reason for this is simple: estimates are necessarily based on probabilities, and probabilities do not combine additively. Monte Carlo simulation provides a principled and intuitive way to obtain probabilistic estimates at the level of an entire project based on estimates of the individual tasks that comprise it.
The problem
The best way to explain Monte Carlo is through a simple worked example. So, let’s consider the four-task project shown in Figure 1. In the project, the second task is dependent on the first, and the third and fourth are dependent on the second but not on each other. The upshot of this is that the first two tasks have to be performed sequentially, while the last two can be done at the same time, but can only be started after the second task is completed.
To summarise: the first two tasks must be done in series and the last two can be done in parallel.
Figure 1 also shows the three point estimates for each task – that is the minimum, maximum and most likely completion times. For completeness I’ve listed them below:
 Task 1 – Min: 2 days; Most Likely: 4 days; Max: 8 days
 Task 2 – Min: 3 days; Most Likely: 5 days; Max: 10 days
 Task 3 – Min: 3 days; Most Likely: 6 days; Max: 9 days
 Task 4 – Min: 2 days; Most Likely: 4 days; Max: 7 days
OK, so that’s the situation as it is given to us. The first step in developing an estimate is to formulate the problem in a way that it can be tackled using Monte Carlo simulation. This brings us to the important topic of the shape of uncertainty, aka probability distributions.
The shape of uncertainty
Consider the data for Task 1. You have been told that it most often finishes on day 4. However, if things go well, it could take as little as 2 days; if things go badly, it could take as long as 8 days. Therefore, your range of possible finish times (outcomes) is between 2 and 8 days.
Clearly, each of these outcomes is not equally likely. The most likely outcome is that you will finish the task in 4 days (from what your team member has told you). Moreover, the likelihood of finishing in less than 2 days or more than 8 days is zero. If we plot the likelihood of completion against completion time, it would look something like Figure 2.
Figure 2 raises a couple of questions:
 What are the relative likelihoods of completion for all intermediate times – i.e. those between 2 to 4 days and 4 to 8 days?
 How can one quantify the likelihood of intermediate times? In other words, how can one get a numerical value of the likelihood for all times between 2 to 8 days? Note that we know from the earlier discussion that this must be zero for any time less than 2 or greater than 8 days.
The two questions are actually related. As we shall soon see, once we know the relative likelihood of completion at all times (compared to the maximum), we can work out its numerical value.
Since we don’t know anything about the intermediate times (I’m assuming there is no other historical data available), the simplest thing to do is to assume that the likelihood increases linearly (as a straight line) from 2 to 4 days and decreases in the same way from 4 to 8 days, as shown in Figure 3. This gives us the well-known triangular distribution.
Jargon Buster: The term distribution is simply a fancy word for a plot of likelihood vs. time.
Of course, this isn’t the only possibility; there are an infinite number of others. Figure 4 is another (admittedly weird) example.
Further, it is quite possible that the upper limit (8 days) is not a hard one. It may be that in exceptional cases the task could take much longer (for example, if your team member calls in sick for two weeks) or even not be completed at all (for example, if she then leaves for that mythical greener pasture). Catering for the latter possibility, the shape of the likelihood might resemble Figure 5.
The main takeaway from the above is that uncertainties should be expressed as shapes rather than numbers, a notion popularised by Sam Savage in his book, The Flaw of Averages.
[Aside: you may have noticed that all the distributions shown above are skewed to the right – that is, they have a long tail. This is a general feature of distributions that describe the time (or cost) of project tasks. It would take me too far afield to discuss why this is so, but if you’re interested you may want to check out my post on the inherent uncertainty of project task estimates.]
From likelihood to probability
Thus far, I have used the word “likelihood” without bothering to define it. It’s time to make the notion more precise. I’ll begin by asking the question: what common sense properties do we expect a quantitative measure of likelihood to have?
Consider the following:
 If an event is impossible, its likelihood should be zero.
 The sum of likelihoods of all possible events should equal complete certainty. That is, it should be a constant. As this constant can be anything, let us define it to be 1.
In terms of the example above, if we denote time by $t$ and the likelihood by $p(t)$, then:

$p(t) = 0$ for $t < 2$ and $t > 8$

And

$\sum_{t} p(t) = 1$ where $2 \leq t \leq 8$

Where $\sum_{t}$ denotes the sum of all nonzero likelihoods – i.e. those that lie between 2 and 8 days. In simple terms this is the area enclosed by the likelihood curves and the x axis in figures 2 to 5. (Technical Note: Since $t$ is a continuous variable, this should be denoted by an integral rather than a simple sum, but this is a technicality that need not concern us here)

$p(t)$ is, in fact, what mathematicians call probability – which explains why I have used the symbol $p$ rather than $l$. Now that I’ve explained what it is, I’ll use the word “probability” instead of “likelihood” in the remainder of this article.
With these assumptions in hand, we can now obtain numerical values for the probability of completion for all times between 2 and 8 days. This can be figured out by noting that the area under the probability curve (the triangle in figure 3 and the weird shape in figure 4) must equal 1, and we’ll do this next. Indeed, for the problem at hand, we’ll assume that all four task durations can be fitted to triangular distributions. This is primarily to keep things simple. However, I should emphasise that you can use any shape so long as you can express it mathematically, and I’ll say more about this towards the end of this article.
The triangular distribution
Let’s look at the estimate for Task 1. We have three numbers corresponding to a minimum, most likely and maximum time. To keep the discussion general, we’ll call these $t_{min}$, $t_{ml}$ and $t_{max}$ respectively (we’ll get back to our estimator’s specific numbers later).
Now, what about the probabilities associated with each of these times?
Since $t_{min}$ and $t_{max}$ correspond to the minimum and maximum times, the probability associated with these is zero. Why? Because if it wasn’t zero, then there would be a nonzero probability of completion for a time less than $t_{min}$ or greater than $t_{max}$ – which isn’t possible. [Note: this is a consequence of the assumption that the probability varies continuously – so if it takes on a nonzero value, $p_{0}$, at $t_{min}$ then it must take on a value slightly less than $p_{0}$ – but greater than 0 – at $t$ slightly smaller than $t_{min}$.] As far as the most likely time, $t_{ml}$, is concerned: by definition, the probability attains its highest value at time $t_{ml}$. So, assuming the probability can be described by a triangular function, the distribution must have the form shown in Figure 6 below.
For the simulation, we need to know the equation describing the above distribution. Although Wikipedia will tell us the answer in a mouseclick, it is instructive to figure it out for ourselves. First, note that the area under the triangle must be equal to 1 because the task must finish at some time between $t_{min}$ and $t_{max}$. As a consequence we have:

$\frac{1}{2}(t_{max} - t_{min})\,p(t_{ml}) = 1 \quad \ldots (1)$

where $p(t_{ml})$ is the probability corresponding to time $t_{ml}$. With a bit of rearranging we get,

$p(t_{ml}) = \frac{2}{t_{max} - t_{min}} \quad \ldots (2)$
To derive the probability for any time $t$ lying between $t_{min}$ and $t_{ml}$, we note that:

$\frac{p(t)}{t - t_{min}} = \frac{p(t_{ml})}{t_{ml} - t_{min}} \quad \ldots (3)$

This is a consequence of the fact that the ratios on either side of equation (3) are equal to the slope of the line joining the points $(t_{min}, 0)$ and $(t_{ml}, p(t_{ml}))$.
Substituting (2) in (3) and simplifying a bit, we obtain:

$p(t) = \frac{2(t - t_{min})}{(t_{ml} - t_{min})(t_{max} - t_{min})} \quad \ldots (4)$

for $t_{min} \leq t \leq t_{ml}$
In a similar fashion one can show that the probability for times lying between $t_{ml}$ and $t_{max}$ is given by:

$p(t) = \frac{2(t_{max} - t)}{(t_{max} - t_{ml})(t_{max} - t_{min})} \quad \ldots (5)$

for $t_{ml} \leq t \leq t_{max}$
Equations (4) and (5) together describe the probability distribution function (or PDF) for all times between $t_{min}$ and $t_{max}$.
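Equations (4) and (5) can be checked numerically. Here is a short Python sketch (the article itself works in Excel) that encodes the two branches and verifies, for Task 1’s numbers, that the area under the curve is 1:

```python
def triangular_pdf(t, t_min, t_ml, t_max):
    """Probability density for the triangular distribution, equations (4) and (5)."""
    if t < t_min or t > t_max:
        return 0.0
    if t <= t_ml:
        return 2 * (t - t_min) / ((t_ml - t_min) * (t_max - t_min))
    return 2 * (t_max - t) / ((t_max - t_ml) * (t_max - t_min))

# Task 1 parameters: min 2, most likely 4, max 8
assert triangular_pdf(2, 2, 4, 8) == 0.0    # zero at the minimum
assert triangular_pdf(8, 2, 4, 8) == 0.0    # zero at the maximum
print(triangular_pdf(4, 2, 4, 8))           # peak: 2/(t_max - t_min) = 1/3

# Area under the curve (numerical integration) should be 1
area = sum(triangular_pdf(2 + i * 0.001, 2, 4, 8) * 0.001 for i in range(6000))
print(round(area, 3))
```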
As it turns out, in Monte Carlo simulations, we don’t work directly with the probability distribution function. Instead we work with the cumulative distribution function (or CDF), which is the probability, $P$, that the task is completed by time $t$. To reiterate, the PDF, $p(t)$, is the probability of the task finishing at time $t$ whereas the CDF, $P(t)$, is the probability of the task completing by time $t$. The CDF, $P(t)$, is essentially a sum of all probabilities between $t_{min}$ and $t$. For $t_{min} \leq t \leq t_{ml}$ this is the area under the triangle with apexes at $(t_{min}, 0)$, $(t, 0)$ and $(t, p(t))$. Using the formula for the area of a triangle (half base times height) and equation (4) we get:

$P(t) = \frac{(t - t_{min})^2}{(t_{ml} - t_{min})(t_{max} - t_{min})} \quad \ldots (6)$

for $t_{min} \leq t \leq t_{ml}$

Noting that for $t_{ml} \leq t \leq t_{max}$, the area under the curve equals the total area minus the area enclosed by the triangle with base between $t$ and $t_{max}$, we have:

$P(t) = 1 - \frac{(t_{max} - t)^2}{(t_{max} - t_{ml})(t_{max} - t_{min})} \quad \ldots (7)$

for $t_{ml} \leq t \leq t_{max}$
As expected, $P(t)$ starts out with a value 0 at $t_{min}$ and then increases monotonically, attaining a value of 1 at $t_{max}$.
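The same numerical check can be done for the CDF. A Python sketch of equations (6) and (7), verified against the Task 1 numbers (zero at the minimum, rising monotonically to one at the maximum):

```python
def triangular_cdf(t, t_min, t_ml, t_max):
    """Cumulative probability of finishing by time t, equations (6) and (7)."""
    if t <= t_min:
        return 0.0
    if t >= t_max:
        return 1.0
    if t <= t_ml:
        return (t - t_min) ** 2 / ((t_ml - t_min) * (t_max - t_min))
    return 1 - (t_max - t) ** 2 / ((t_max - t_ml) * (t_max - t_min))

# Task 1: min 2, most likely 4, max 8
print(triangular_cdf(2, 2, 4, 8))   # 0.0 at the minimum time
print(triangular_cdf(4, 2, 4, 8))   # 1/3: probability of finishing by the most likely time
print(triangular_cdf(8, 2, 4, 8))   # 1.0 at the maximum time
```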
To end this section, let’s plug in the numbers quoted by our estimator at the start of this section: $t_{min} = 2$, $t_{ml} = 4$ and $t_{max} = 8$. The resulting PDF and CDF are shown in figures 8 and 9.
Monte Carlo in a minute
Now with all that conceptual work done, we can get to the main topic of this post: Monte Carlo estimation. The basic idea behind Monte Carlo is to simulate the entire project (all 4 tasks in this case) a large number N (say 10,000) times and thus obtain N overall completion times. In each of the N trials, we simulate each of the tasks in the project and add them up appropriately to give us an overall project completion time for the trial. The resulting N overall completion times will all be different, ranging from the sum of the minimum completion times to the sum of the maximum completion times. In other words, we will obtain the PDF and CDF for the overall completion time, which will enable us to answer questions such as:
 How likely is it that the project will be completed within 17 days?
 What’s the estimated time for which I can be 90% certain that the project will be completed? For brevity, I’ll call this the 90% completion time in the rest of this piece.
“OK, that sounds great”, you say, “but how exactly do we simulate a single task”?
Good question, and I was just about to get to that…
Simulating a single task using the CDF
As we saw earlier, the CDF for the triangular distribution has an S shape and ranges from 0 to 1 in value. It turns out that the S shape is characteristic of all CDFs, regardless of the details of the underlying PDF. Why? Because the cumulative probability must lie between 0 and 1 (remember, probabilities can never exceed 1, nor can they be negative).
OK, so to simulate a task, we:
 generate a random number between 0 and 1. This corresponds to the probability that the task will finish by time t.
 find the time, t, that corresponds to this value of probability. This is the completion time for the task for this trial.
Incidentally, this method is called inverse transform sampling.
An example might help clarify how inverse transform sampling works. Assume that the random number generated is 0.4905. From the CDF for the first task, we see that this value of probability corresponds to a completion time of 4.503 days, which is the completion for this trial (see Figure 10). Simple!
In this case we found the time directly from the computed CDF. That’s not too convenient when you’re simulating the project 10,000 times. Instead, we need a programmable mathematical expression that gives us the time corresponding to a given probability directly. This can be obtained by solving equations (6) and (7) for $t$. Some straightforward algebra yields the following two expressions for $t$:

$t = t_{min} + \sqrt{P(t)(t_{ml} - t_{min})(t_{max} - t_{min})} \quad \ldots (8)$

for $t_{min} \leq t \leq t_{ml}$

And

$t = t_{max} - \sqrt{(1 - P(t))(t_{max} - t_{ml})(t_{max} - t_{min})} \quad \ldots (9)$

for $t_{ml} \leq t \leq t_{max}$
These can be easily combined in a single Excel formula using an IF function, and I’ll show you exactly how in a minute. Yes, we can now finally get down to the Excel simulation proper and you may want to download the workbook if you haven’t done so already.
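For readers who prefer code to spreadsheet formulas, the IF-style conditional looks like this in Python; it reproduces the worked example above, where a random number of 0.4905 maps to roughly 4.503 days for Task 1:

```python
import math

def triangular_inverse_cdf(p, t_min, t_ml, t_max):
    """Completion time for a given cumulative probability p, equations (8) and (9).

    This is the inverse transform sampling step: feed in a uniform random
    number between 0 and 1, get back a simulated task duration.
    """
    p_ml = (t_ml - t_min) / (t_max - t_min)   # cumulative probability at t_ml
    if p <= p_ml:
        return t_min + math.sqrt(p * (t_ml - t_min) * (t_max - t_min))
    return t_max - math.sqrt((1 - p) * (t_max - t_ml) * (t_max - t_min))

# Task 1 (min 2, most likely 4, max 8): 0.4905 maps to ~4.503 days
print(round(triangular_inverse_cdf(0.4905, 2, 4, 8), 3))  # 4.503
```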
The simulation
Open up the workbook and focus on the first three columns of the first sheet to begin with. These simulate the first task in Figure 1, which also happens to be the task we have used to illustrate the construction of the triangular distribution as well as the mechanics of Monte Carlo.
Rows 2 to 4 in columns A and B list the min, most likely and max completion times while the same rows in column C list the probabilities associated with each of these times. For $t_{min}$ the probability is 0 and for $t_{max}$ it is 1. The probability at $t_{ml}$ can be calculated using equation (6) which, for $t = t_{ml}$, reduces to

$P(t_{ml}) = \frac{t_{ml} - t_{min}}{t_{max} - t_{min}} \quad \ldots (10)$
Rows 6 through 10005 in column A are simulated probabilities of completion for Task 1. These are obtained via the Excel RAND() function, which generates uniformly distributed random numbers lying between 0 and 1. This gives us a list of probabilities corresponding to 10,000 independent simulations of Task 1.
The 10,000 probabilities need to be translated into completion times for the task. This is done using equations (8) or (9), depending on whether the simulated probability is less or greater than $P(t_{ml})$, which is in cell C3 (and given by equation (10) above). The conditional statement can be coded in an Excel formula using the IF() function.
Tasks 2–4 are coded in exactly the same way, with distribution parameters in rows 2 through 4 and simulation details in rows 6 through 10005 in the columns listed below:
 Task 2 – probabilities in column D; times in column F
 Task 3 – probabilities in column H; times in column I
 Task 4 – probabilities in column K; times in column L
That’s basically it for the simulation of individual tasks. Now let’s see how to combine them.
For tasks in series (Tasks 1 and 2), we simply sum the completion times for each task to get the overall completion times for the two tasks. This is what’s shown in rows 6 through 10005 of column G.
For tasks in parallel (Tasks 3 and 4), the overall completion time is the maximum of the completion times for the two tasks. This is computed and stored in rows 6 through 10005 of column N.
Finally, the overall project completion time for each simulation is simply the sum of columns G and N (shown in column O).
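Put together, the whole workbook boils down to a few lines. The following is a hedged Python re-implementation of the workbook’s logic (a sketch, not the workbook itself): 10,000 trials, Tasks 1 and 2 summed, the larger of Tasks 3 and 4 taken, and the two results added.

```python
import math
import random

def sample_triangular(t_min, t_ml, t_max, rng):
    """One inverse-transform draw from a triangular distribution."""
    p = rng.random()
    p_ml = (t_ml - t_min) / (t_max - t_min)
    if p <= p_ml:
        return t_min + math.sqrt(p * (t_ml - t_min) * (t_max - t_min))
    return t_max - math.sqrt((1 - p) * (t_max - t_ml) * (t_max - t_min))

# (min, most likely, max) estimates for the four tasks, from Figure 1
tasks = {1: (2, 4, 8), 2: (3, 5, 10), 3: (3, 6, 9), 4: (2, 4, 7)}
rng = random.Random(42)
N = 10_000

totals = []
for _ in range(N):
    d = {k: sample_triangular(*params, rng) for k, params in tasks.items()}
    # Tasks 1 and 2 in series; Tasks 3 and 4 in parallel
    totals.append(d[1] + d[2] + max(d[3], d[4]))

totals.sort()
mean = sum(totals) / N
p90 = totals[int(0.9 * N)]   # 90% likely completion time
print(f"mean: {mean:.1f} days, 90% completion: {p90:.1f} days")
```

With the fixed seed above the mean and 90% figures should land close to the article’s ~16.8 and ~19.5 days; your Excel numbers will differ slightly from run to run because RAND() is not seeded.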
Sheets 2 and 3 are plots of the probability and cumulative probability distributions for overall project completion times. I’ll cover these in the next section.
Discussion – probabilities and estimates
The figure on Sheet 2 of the Excel workbook (reproduced in Figure 11 below) is the probability distribution function (PDF) of completion times. The x-axis shows the elapsed time in days and the y-axis the number of Monte Carlo trials with a completion time lying in the relevant time bin (of width 0.5 days). As an example, for the simulation shown in Figure 11, there were 882 trials (out of 10,000) with a completion time between 16.25 and 16.75 days. Your numbers will vary, of course, but you should have a maximum in the 16 to 17 day range and a trial count that is reasonably close to the one I got.
I’ll say a bit more about Figure 11 in the next section. For now, let’s move on to Sheet 3 of the workbook, which shows the cumulative probability of completion by a particular day (Figure 12 below). The figure shows the cumulative distribution function (CDF), which for a given day is the sum of the probabilities of completion for all days from the earliest possible completion day up to that day.
To reiterate a point made earlier, the reason we work with the CDF rather than the PDF is that we are interested in knowing the probability of completion by a particular date (e.g. it is 90% likely that we will finish by April 20th) rather than the probability of completion on a particular date (e.g. there’s a 10% chance we’ll finish on April 17th). We can now answer the two questions we posed earlier. As a reminder, they are:
 How likely is it that the project will be completed within 17 days?
 What’s the 90% likely completion time?
Both questions are easily answered by using the cumulative distribution chart on Sheet 3 (or Fig 12). Reading the relevant numbers from the chart, I see that:
 There’s a 60% chance that the project will be completed in 17 days.
 The 90% likely completion time is 19.5 days.
How does the latter compare to the sum of the 90% likely completion times for the individual tasks? The 90% likely completion time for a given task can be calculated by solving equation (9) for $t$, with appropriate values for the parameters $t_{min}$, $t_{ml}$ and $t_{max}$ plugged in, and $P(t)$ set to 0.9. This gives the following values for the 90% likely completion times:
 Task 1 – 6.5 days
 Task 2 – 8.1 days
 Task 3 – 7.7 days
 Task 4 – 5.8 days
Summing up the first three tasks (remember, Tasks 3 and 4 are in parallel) we get a total of 22.3 days, which is clearly an overestimation. Now, with the benefit of having gone through the simulation, it is easy to see that the sum of 90% likely completion times for individual tasks does not equal the 90% likely completion time for the sum of the relevant individual tasks – the first three tasks in this particular case. Why? Essentially because a Monte Carlo run in which the first three tasks all take as long as their (individual) 90% likely completion times is highly unlikely. Exercise: use the worksheet to estimate how likely this is.
There’s much more that can be learnt from the CDF. For example, it also tells us that the greatest uncertainty in the estimate is in the 5 day period from ~14 to 19 days because that’s the region in which the probability changes most rapidly as a function of elapsed time. Of course, the exact numbers are dependent on the assumed form of the distribution. I’ll say more about this in the final section.
To close this section, I’d like to reprise a point I mentioned earlier: that uncertainty is a shape, not a number. Monte Carlo simulations make the uncertainty in estimates explicit and can help you frame your estimates in the language of probability…and using a tool like Excel can help you explain these to nontechnical people like your manager.
Closing remarks
We’ve covered a fair bit of ground: starting from general observations about how long a task takes, we saw how to construct simple probability distributions and then combine these using Monte Carlo simulation. Before I close, there are a few general points I should mention for completeness…and as a warning.
First up, it should be clear that the estimates one obtains from a simulation depend critically on the form and parameters of the distribution used. The parameters are essentially an empirical matter; they should be determined using historical data. The form of the function is another matter altogether: as pointed out in an earlier section, one cannot determine the shape of a function from a finite number of data points. Instead, one has to focus on the properties that are important. For example, is there a small but finite chance that a task can take an unreasonably long time? If so, you may want to use a lognormal distribution…but remember, you will need to find a sensible way to estimate the distribution parameters from your historical data.
Second, you may have noted from the probability distribution curve (Figure 11) that despite the skewed distributions of the individual tasks, the distribution of the overall completion time is somewhat symmetric with a minimum of ~9 days, most likely time of ~16 days and maximum of 24 days. It turns out that this is a general property of distributions that are generated by adding a large number of independent probabilistic variables. As the number of variables increases, the overall distribution will tend to the ubiquitous Normal distribution.
The assumption of independence merits a closer look. In the case at hand, it implies that the completion times for the tasks are independent of each other. As most project managers will know from experience, this is rarely the case: in real life, a task that is delayed will usually have knock-on effects on subsequent tasks. One can easily incorporate such dependencies in a Monte Carlo simulation. A formal way to do this is to introduce a nonzero correlation coefficient between tasks, as I have done here. A simpler and more realistic approach is to introduce conditional inter-task dependencies. As an example, one could have an inter-task delay that kicks in only if the predecessor task takes more than 80% of its maximum time.
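Such a conditional dependency is a one-line change to a simulation loop. Here is a sketch under assumed numbers – the one-day penalty is illustrative, not from the article – using Python’s built-in triangular sampler on Tasks 1 and 2:

```python
import random

rng = random.Random(1)

def mean_duration(delay_rule, trials=10_000):
    """Mean combined duration of two sequential triangular tasks,
    optionally with a conditional knock-on delay."""
    total = 0.0
    for _ in range(trials):
        t1 = rng.triangular(2, 8, 4)    # Task 1: (low, high, mode)
        t2 = rng.triangular(3, 10, 5)   # Task 2
        if delay_rule and t1 > 0.8 * 8:
            t2 += 1.0                   # knock-on delay when Task 1 runs long
        total += t1 + t2
    return total / trials

base = mean_duration(delay_rule=False)
with_dependency = mean_duration(delay_rule=True)
print(f"independent: {base:.2f} days, with knock-on delay: {with_dependency:.2f} days")
```

The delay fires only in the minority of trials where Task 1 exceeds 6.4 days, so the mean shifts by a fraction of a day rather than the full penalty.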
Thirdly, you may have wondered why I used 10,000 trials: why not 100, 1,000 or 20,000? This has to do with the tricky issue of convergence. In a nutshell, the estimates we obtain should not depend on the number of trials used. Why? Because if they did, they’d be meaningless.
Operationally, convergence means that any predicted quantity based on aggregates should not vary with number of trials. So, if our Monte Carlo simulation has converged, our prediction of 19.5 days for the 90% likely completion time should not change substantially if I increase the number of trials from ten to twenty thousand. I did this and obtained almost the same value of 19.5 days. The average and median completion times (shown in cell Q3 and Q4 of Sheet 1) also remained much the same (16.8 days). If you wish to repeat the calculation, be sure to change the formulas on all three sheets appropriately. I was lazy and hardcoded the number of trials. Sorry!
Finally, I should mention that simulations can be usefully performed at a higher level than individual tasks. In their highly readable book, Waltzing With Bears: Managing Risk on Software Projects, Tom DeMarco and Timothy Lister show how Monte Carlo methods can be used for variables such as velocity, time and cost at the project level, as opposed to the task level. I believe it is better to perform simulations at the lowest possible level, the main reason being that it is easier, and less error-prone, to estimate individual tasks than entire projects. Nevertheless, high level simulations can be very useful if one has reliable data to base them on.
There are a few more things I could say about the usefulness of the generated distribution functions and Monte Carlo in general, but they are best relegated to a future article. This one is much too long already and I think I’ve tested your patience enough. Thanks so much for reading, I really do appreciate it and hope that you found it useful.
Acknowledgement: My thanks to Peter Holberton for pointing out a few typographical and coding errors in an earlier version of this article. These have now been fixed. I’d be grateful if readers could bring any errors they find to my attention.