Eight to Late

Sensemaking and Analytics for Organizations

Tackling the John Smith Problem – deduplicating data via fuzzy matching in R


Last week I attended a CRM & data user group meeting for not-for-profits (NFPs), organized by my friend Yael Wasserman from Mission Australia. Following a presentation from a vendor, we broke up into groups and discussed common data quality issues that NFPs (and dare I say most other organisations) face. Number one on the list was the vexing issue of duplicate constituent (donor) records – henceforth referred to as dupes. I like to call this the John Smith Problem because a typical customer database in a country with a large Anglo population is likely to contain a fair number of records for customers with that name. The problem is tricky because one has to identify John Smiths who appear to be distinct in the database but are actually the same person, while also ensuring that one does not inadvertently merge two distinct John Smiths.

The John Smith problem is particularly acute for NFPs as much of their customer data comes in either via manual data entry or bulk loads with less than optimal validation. To be sure, all the NFPs represented at the meeting have some level of validation on both modes of entry, but all participants admitted that dupes tend to sneak in nonetheless…and at volumes that merit serious attention.  Yael and his team have had some success in cracking the dupe problem using SQL-based matching of a combination of fields such as first name, last name and address or first name, last name and phone number and so on. However, as he pointed out, this method is limited because:

  1. It does not allow for typos and misspellings.
  2. Matching on too few fields runs the risk of false positives – i.e. labelling non-dupes as dupes.

These problems arise because SQL-based matching requires one to pre-specify exact match patterns. The solution is straightforward: use fuzzy matching instead. The idea behind fuzzy matching is simple: allow for inexact matches, assigning each match a similarity score ranging from 0 to 1, with 0 being complete dissimilarity and 1 being a perfect match. My primary objective in this article is to show how one can make headway with the John Smith problem using the fuzzy matching capabilities available in R.

A bit about fuzzy matching

Before getting down to business, it is worth briefly explaining how fuzzy matching works. The basic idea is simple: one generalises the notion of a match from a binary “match” / “no match” to allow for partial matches. To do this, we need to introduce the notion of an edit distance, which is essentially the minimum number of operations required to transform one string into another. For example, the edit distance between the strings boy and bay is 1: only one edit (substituting o with a) is required to transform one string into the other. The most commonly used edit distance is the Levenshtein distance, which is “the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.”

A variant called the Damerau-Levenshtein distance, which additionally allows for the transposition of two adjacent characters (counted as one operation, not two), is often more useful in practice. We’ll use an implementation of it called the optimal string alignment (osa) distance. If you’re interested in finding out more about osa, check out the Damerau-Levenshtein article linked to earlier in this paragraph.

Since longer strings will potentially have larger numeric distances between them, it makes sense to normalise the distance to a value between 0 and 1. We’ll do this by dividing the calculated osa distance by the length of the longer of the two strings. Yes, this is crude but, as you will see, it works reasonably well. The resulting number is a normalised measure of the dissimilarity between the two strings. To get a similarity measure we simply subtract the dissimilarity from 1, so a normalised dissimilarity of 1 translates to a similarity score of 0 – i.e. the strings are completely dissimilar. I hope I’m not belabouring the point; I just want to make sure it is perfectly clear before going on.
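To make this concrete, here’s a minimal illustration using the stringdist package (which we’ll load properly in the next section). The strings are made up, but the calculation mirrors the normalisation just described:

#stringdist() uses the osa method by default
library("stringdist")
stringdist("boy","bay") #one substitution, so the distance is 1
#a made-up dirty-data example: one transposition and one substitution
a <- "john smith"
b <- "jhon smyth"
d <- stringdist(a,b) #osa distance is 2
#normalise by the length of the longer string (nchar is base R), then convert to a similarity
norm_d <- d/max(nchar(a),nchar(b))
1 - norm_d #similarity score of 0.8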

Preparation

In what follows, I assume you have R and RStudio installed. If not, you can access the software here and here for Windows and here for Macs; installation for both products is usually quite straightforward.

You may also want to download the Excel file many_john_smiths, which contains records for ten fictitious John Smiths. At this point I should affirm that, as far as the dataset is concerned, any resemblance to actual John Smiths, living or dead, is purely coincidental! Once you have downloaded the file, open it in Excel, examine the records, and then save it as a csv file in your R working directory (or any other convenient place) for processing in R.

As an aside, if you have access to a database, you may also want to load the file into a table called many_john_smiths and run the following dupe-detecting SQL statement:


select * from many_john_smiths t1
where exists
(select 'x' from many_john_smiths t2
where
t1.FirstName=t2.FirstName
and
t1.LastName=t2.LastName
and
t1.AddressPostcode=t2.AddressPostcode
and
t1.CustomerID <> t2.CustomerID)

You may also want to try matching on other column combinations, such as First/Last Name and AddressLine1, or First/Last Name and AddressSuburb. The limitations of column-based exact matching will be evident immediately. Indeed, I have deliberately designed the records to highlight some of the issues associated with dirty data: misspellings, typos, misheard names over the phone and so on. A quick perusal of the records will show that there are probably two distinct John Smiths in the list. The problem is to quantify this observation. We do that next.

Tackling the John Smith problem using R

We’ll use the following libraries: stringdist, stringr and rockchalk. The first, stringdist, contains a collection of string distance functions; we’ll use stringdistmatrix(), which returns a matrix of pairwise string distances (osa by default) when passed a vector of strings. The second, stringr, provides a number of string utilities, of which we’ll use str_length(), which returns the length of a string. Finally, rockchalk contains a useful function, vech2mat(), which casts a half-vector into a triangular matrix (this will make more sense when you see it in action).

OK, so on to the code. The first step is to load the required libraries:

#load libraries
library("stringdist")
library("stringr")
library("rockchalk")

We then read in the data, ensuring that we override the annoying default behaviour of R, which is to convert strings to categorical variables – we want strings to remain strings!

#read data, taking care to ensure that strings remain strings
df <- read.csv("many_john_smiths.csv",stringsAsFactors = F)
#examine dataframe
str(df)

The output from str(df) (not shown)  indicates that all columns barring CustomerID are indeed strings (i.e. type=character).

The next step is to find the length of each row:

#find length of string formed by each row (excluding title)
rowlen <- str_length(paste0(df$FirstName,df$LastName,df$AddressLine1,
df$AddressPostcode,df$AddressSuburb,df$Phone))
#examine row lengths
rowlen
> [1] 41 43 39 42 28 41 42 42 42 43

Note that I have excluded the Title column as I did not think it was relevant to determining duplicates.

Next we find the distance between every pair of records in the dataset, using the stringdistmatrix() function mentioned earlier:


#stringdistmatrix - finds pairwise osa distance between every pair of elements in a
#character vector
d <- stringdistmatrix(paste0(df$FirstName,df$LastName,df$AddressLine1,
df$AddressPostcode,df$AddressSuburb,df$Phone))

stringdistmatrix() returns an object of type dist (distance), so we’ll cast it into a matrix object for use later. We’ll also set the diagonal entries to 0 (since we know that the distance between a string and itself is zero):

#cast d as a proper matrix
d_matrix <- as.matrix(d)
#set diagonals to 0 - distance between a string and itself is 0
diag(d_matrix) <- 0

For reasons that will become clear later, it is convenient to normalise the distance – i.e. scale it to a number that lies between 0 and 1. We’ll do this by dividing the distance between two strings by the length of the longer string. We’ll use the nifty base R function combn() to compute the maximum length for every pair of strings:

#find the length of the longer of two strings in each pair
pwmax <- combn(rowlen,2,max,simplify = T)
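As an aside, here’s what combn() does on a small example (the first three row lengths from above); it applies the supplied function – max, in our case – to every pair of elements:

#a small illustration: the pairs are (41,43), (41,39) and (43,39)
combn(c(41,43,39),2,max,simplify = T)
#returns 43 41 43 - the pairwise maxima in half-vector order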

The first argument is the vector from which combinations are to be generated, the second is the group size (2, since we want pairs) and the third argument indicates whether or not the result should be returned as an array (simplify=T) or list (simplify=F). The returned object, pwmax, is a one-dimensional array containing the pairwise maximum lengths. We convert this to a proper matrix using the vech2mat() function from rockchalk.

#convert the resulting vector to a matrix, the diagonal entries should be the
#respective rowlength
pmax_matrix <- vech2mat(pwmax,diag=rowlen)

Now we divide the distance matrix by the maximum length matrix in order to obtain a normalised distance:

#normalise distance
norm_dist_matrix <- d_matrix/pmax_matrix
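As a quick sanity check, the entries of norm_dist_matrix should all lie between 0 and 1:

#sanity check: normalised distances should lie between 0 and 1
range(norm_dist_matrix)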

The normalised distance lies between 0 and 1 (check this!) so we can define similarity as 1 minus distance:

#similarity = 1 - distance
sim_matrix <-round(1-norm_dist_matrix,2)
sim_matrix
       1    2    3    4    5    6    7    8    9   10
1   1.00 0.84 0.76 0.64 0.54 0.46 0.52 0.76 0.55 0.60
2   0.84 1.00 0.70 0.51 0.40 0.51 0.47 0.70 0.49 0.49
3   0.76 0.70 1.00 0.43 0.33 0.32 0.38 0.60 0.55 0.42
4   0.64 0.51 0.43 1.00 0.64 0.71 0.79 0.52 0.50 0.70
5   0.54 0.40 0.33 0.64 1.00 0.56 0.50 0.45 0.43 0.49
6   0.46 0.51 0.32 0.71 0.56 1.00 0.67 0.40 0.31 0.56
7   0.52 0.47 0.38 0.79 0.50 0.67 1.00 0.48 0.45 0.63
8   0.76 0.70 0.60 0.52 0.45 0.40 0.48 1.00 0.48 0.49
9   0.55 0.49 0.55 0.50 0.43 0.31 0.45 0.48 1.00 0.44
10  0.60 0.49 0.42 0.70 0.49 0.56 0.63 0.49 0.44 1.00
#write out the similarity matrix
write.csv(sim_matrix,file="similarity_matrix.csv")

The similarity matrix looks quite reasonable: you can, for example, see that records 1 and 2 (similarity score=0.84) are quite similar while records 1 and 6 are quite dissimilar (similarity score=0.46).  Now let’s extract some results more systematically. We’ll do this by printing out the top 5 non-diagonal similarity scores and the associated records for each of them. This needs a bit of work. To start with, we note that the similarity matrix (like the distance matrix) is symmetric so we’ll convert it into an upper triangular matrix to avoid double counting. We’ll also set the diagonal entries to 0 to avoid comparing a record with itself:

#convert to upper triangular to prevent double counting
sim_matrix[lower.tri(sim_matrix)] <- 0
#set diagonals to zero to avoid comparing row to itself
diag(sim_matrix) <- 0

Next we create a function that returns the n largest similarity scores and their associated row and column number – we’ll need the latter to identify the pair of records that are associated with each score:

#adapted from:
#https://stackoverflow.com/questions/32544566/find-the-largest-values-on-a-matrix-in-r
nlargest <- function(m, n) {
  #linear indexes of the n largest entries of m, in decreasing order of value
  res <- order(m, decreasing = T)[seq_len(n)]
  #convert the linear indexes to row/column positions in the matrix
  pos <- arrayInd(res, dim(m), useNames = TRUE)
  list(values = m[res],
       position = pos)
}

The function takes two arguments: a matrix m and a number n indicating the top n scores to be returned. Let’s set this number to 5 – i.e. we want the top 5 scores and the associated record indexes. We’ll store the output of nlargest in the variable sim_list:

top_n <- 5
sim_list <- nlargest(sim_matrix,top_n)
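If you want to peek at what nlargest() has returned before printing out the records, a quick way is:

#inspect the result: a vector of the top 5 scores and a matrix of their row/column positions
str(sim_list)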

Finally, we loop through sim_list printing out the scores and associated records as we go along:

for (i in 1:top_n){
  #position is a matrix with row and column indexes; position[i] picks the row
  #index of the ith pair and position[i+top_n] the corresponding column index
  rec <- as.character(df[sim_list$position[i],])
  sim_rec <- as.character(df[sim_list$position[i+top_n],])
  cat("score: ",sim_list$values[i],"\n")
  cat("record 1: ",rec,"\n")
  cat("record 2: ",sim_rec,"\n\n")
}
score: 0.84
record 1: 1 John Smith Mr 12 Acadia Rd Burnton 9671 1234 5678
record 2: 2 Jhon Smith Mr 12 Arcadia Road Bernton 967 1233 5678


score: 0.79
record 1: 4 John Smith Mr 13 Kynaston Rd Burnton 9671 34561234
record 2: 7 Jon Smith Mr. 13 Kinaston Rd Barnston 9761 36451223


score: 0.76
record 1: 1 John Smith Mr 12 Acadia Rd Burnton 9671 1234 5678
record 2: 3 J Smith Mr. 12 Acadia Ave Burnton 867`1 1233 567


score: 0.76
record 1: 1 John Smith Mr 12 Acadia Rd Burnton 9671 1234 5678
record 2: 8 John Smith Dr 12 Aracadia St Brenton 9761 12345666


score: 0.71
record 1: 4 John Smith Mr 13 Kynaston Rd Burnton 9671 34561234
record 2: 6 John S Dr. 12 Kinaston Road Bernton 9677 34561223

As you can see, the method correctly identifies close matches: there appear to be two distinct John Smiths in the list (clustered around records 1 and 4) – and possibly more, depending on where one sets the similarity threshold. I’ll leave you to explore this further on your own.
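For example, one way to explore the effect of the threshold (a rough sketch – the 0.7 cutoff below is an arbitrary choice, not a recommendation) is to list all record pairs whose similarity exceeds a chosen value:

#list all candidate duplicate pairs above a chosen similarity threshold
threshold <- 0.7
candidate_pairs <- which(sim_matrix > threshold, arr.ind = TRUE)
#each row of candidate_pairs gives the indexes of a pair of records worth reviewing manually
candidate_pairs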

The John Smith problem in real life

As a proof of concept, I ran the following SQL on a real CRM database hosted on SQL Server:

select
FirstName+LastName,
count(*)
from
TableName
group by
FirstName+LastName
having
count(*)>100
order by
count(*) desc

I was gratified to note that John Smith did indeed come up tops – well over 200 records. I suspected there were a few duplicates lurking within, so I extracted the records and ran the above R code (with a few minor changes). There were indeed some duplicates! I also observed that the code ran with no noticeable degradation, despite the dataset having well over 10 times the number of records used in the toy example above. I have not run it on larger datasets yet, but I suspect one will run into memory issues when the number of records gets into the thousands. Nevertheless, based on my experimentation thus far, the method appears viable for small datasets.

The problem of deduplicating large datasets is left as an exercise for motivated readers 😛

Wrapping up

Often organisations will turn to specialist consultancies to fix data quality issues only to find that their work, besides being quite pricey, comes with a lot of caveats and cosmetic fixes that do not address the problem fully.  Given this, there is a case to be made for doing as much of the exploratory groundwork as one can so that one gets a good idea of what can be done and what cannot. At the very least, one will then be able to keep one’s consultants on their toes. In my experience, the John Smith problem ranks right up there in the list of data quality issues that NFPs and many other organisations face. This article is intended as a starting point to address this issue using an easily available and cost effective technology.

Finally,  I should reiterate that the approach discussed here is just one of many possible and is neither optimal nor efficient.  Nevertheless, it works quite well on small datasets, and is therefore offered here as a starting point for your own attempts at tackling the problem. If you come up with something better – as I am sure you can – I’d greatly appreciate your letting me know via the contact page on this blog or better yet, a comment.

Acknowledgements:

I’m indebted to Homan Zhao and Sree Acharath for helpful conversations on fuzzy matching.  I’m also grateful to  all those who attended the NFP CRM and Data User Group meetup that was held earlier this month – the discussions at that meeting inspired this piece.

Written by K

October 9, 2019 at 8:49 pm

Posted in Data Analytics, Data Science, R


3 or 7, truth or trust


“It is clear that ethics cannot be articulated.” – Ludwig Wittgenstein

Over the last few years I’ve been teaching and refining a series of lecture-workshops on Decision Making Under Uncertainty. Audiences include data scientists and mid-level managers working in corporates and public service agencies. The course is based on the distinction between uncertainties in which the variables are known and can be quantified versus those in which the variables are not known upfront and/or are hard to quantify.

Before going any further, it is worth explaining the distinction via a couple of examples:

An example of the first type of uncertainty is project estimation. A project has an associated time and cost, and although we don’t know what their values are upfront, we can estimate them if we have the right data.  The point to note is this: because such problems can be quantified, the human brain tends to deal with them in a logical manner.

In contrast, business strategy is an example of the second kind of uncertainty. Here we do not know what the key variables are upfront. Indeed we cannot, because different stakeholders will perceive different aspects of a strategy to be paramount depending on their interests – consider, for example, the perspective of a CFO versus that of a CMO. Because of these differences, one cannot make progress on such problems until agreement has been reached on what is important to the group as a whole.  The point to note here is that since such problems involve contentious issues, our reactions to them  tend to be emotional rather than logical.

The difference between the two types of uncertainty is best conveyed experientially, so I have a few in-class activities aimed at doing just that. One of them is an exercise I call “3 or 7“, in which I give students a sheet with the following printed on it:

Circle either the number 3 or 7 below, depending on whether you want 3 marks or 7 marks added to your Assignment 2 final mark. Yes, this offer is for real, but there is a catch: if more than 10% of the class select 7, no one gets anything.

Write your student ID on the paper so that Kailash can award you the marks. Needless to say, your choice will remain confidential; no one (but Kailash) will know what you have selected.

3              7

Prior to handing out the sheet, I tell them that they:

  • should sit far enough apart so that they can’t see what their neighbours choose,
  • are not allowed to communicate their choices to others until the entire class has turned in their sheets.

Before reading any further you may want to think about what typically happens.

–x–

Many readers would have recognized this exercise as a version of the Prisoner’s Dilemma and, indeed, many students in my classes recognize this too. Even so, there are always enough “win at the cost of others” types in the room to ensure that I don’t have to award any extra marks. I’ve run the exercise about 10 times, often with groups comprised of highly collaborative individuals who work well together. Despite that, 15-20% of the class ends up opting for 7.

It never fails to surprise me that, even in relatively close-knit groups, there are invariably a number of individuals who, if given a chance to gain at the expense of their colleagues, will not hesitate to do so providing their anonymity is ensured.

–x–

Conventional management thinking deems that any organisational activity involving several people has to be closely supervised. Underlying this view is the assumption that individuals involved in the activity will, if left unsupervised, make decisions based on self-interest rather than the common good (as happens in the prisoner’s dilemma game). This assumption finds justification in rational choice theory, which predicts that individuals will act in ways that maximise their personal benefit without any regard to the common good. This view is exemplified in 3 or 7 and, at a societal level, in the so-called Tragedy of the Commons, where individuals who have access to a common resource over-exploit it,  thus depleting the resource entirely.

Fortunately, such a scenario need not come to pass: the work of Elinor Ostrom, one of the 2009 Nobel prize winners for Economics, shows that, given the right conditions, groups can work towards the common good even if it means forgoing personal gains.

Classical economics assumes that individuals’ actions are driven by rational self-interest – i.e. the well-known “what’s in it for me” factor. Clearly, though, a group will achieve much better results as a whole if it exploits a shared resource in a cooperative way. There are several real-world examples where such cooperative behaviour has been successful in achieving outcomes for the common good (this paper touches on some). However, according to classical economic theory, such cooperative behaviour should simply not be possible.

So the question is: what’s wrong with rational choice theory?  A couple of things, at least:

Firstly, implicit in rational choice theory is the assumption that individuals can figure out the best choice in any given situation.  This is obviously incorrect. As Ostrom has stated in one of her papers:

Because individuals are boundedly rational, they do not calculate a complete set of strategies for every situation they face. Few situations in life generate information about all potential actions that one can take, all outcomes that can be obtained, and all strategies that others can take.

Instead, they use heuristics (experience-based methods), norms (value-based techniques) and rules (mutually agreed regulations) to arrive at “good enough” decisions. Note that Ostrom makes a distinction between norms and rules, the former being implicit (unstated) rules that are determined by cultural attitudes and values.

Secondly, rational choice theory assumes that humans behave as self-centred, short-term maximisers. This assumption works reasonably well in competitive situations such as the stock market, but not in situations that call for collective action, such as the prisoner’s dilemma.

Ostrom’s work essentially addresses the limitations of rational choice theory by outlining how individuals can work together to overcome self-interest.

–x–

In a paper entitled, A Behavioral Approach to the Rational Choice Theory of Collective Action, published in 1998, Ostrom states that:

…much of our current public policy analysis is based on an assumption that rational individuals are helplessly trapped in social dilemmas from which they cannot extract themselves without inducement or sanctions applied from the outside. Many policies based on this assumption have been subject to major failure and have exacerbated the very problems they were intended to ameliorate. Policies based on the assumptions that individuals can learn how to devise well-tailored rules and cooperate conditionally when they participate in the design of institutions affecting them are more successful in the field…[Note:  see this book by Baland and Platteau, for example]

Since rational choice theory aims to maximise individual gain,  it does not work in situations that demand collective action – and Ostrom presents some very general evidence to back this claim.  More interesting than the refutation of rational choice theory, though, is Ostrom’s discussion of the ways in which individuals “trapped” in social dilemmas end up making the right choices. In particular she singles out two empirically grounded ways in which individuals work towards outcomes that are much better than those offered by rational choice theory. These are:

Communication: In the rational view, communication makes no difference to the outcome.  That is, even if individuals make promises and commitments to each other (through communication), they will invariably break these for the sake of personal gain …or so the theory goes. In real life, however, it has been found that opportunities for communication significantly raise the cooperation rate in collective efforts (see this paper abstract or this one, for example). Moreover, research shows that face-to-face is far superior to any other form of communication, and that the main benefit achieved through communication is exchanging mutual commitment (“I promise to do this if you’ll promise to do that”) and increasing trust between individuals. It is interesting that the main role of communication is to enhance or reinforce the relationship between individuals rather than to transfer information.  This is in line with the interactional theory of communication.

Innovative Governance:  Communication by itself may not be enough; there must be consequences for those who break promises and commitments. Accordingly, cooperation can be encouraged by implementing mutually accepted rules for individual conduct, and imposing sanctions on those who violate them. This effectively amounts to designing and implementing novel governance structures for the activity. Note that this must be done by the group; rules thrust upon the group by an external authority are unlikely to work.

Of course, these factors do not come into play in artificially constrained and time-bound scenarios like 3 or 7.  In such situations, there is no opportunity or time to communicate or set up governance structures. What is clear, even from the simple 3 or 7 exercise,  is that these are required even for groups that appear to be close-knit.

Ostrom also identifies three core relationships that promote cooperation. These are:

Reciprocity: this refers to a family of strategies that are based on the expectation that people will respond to each other in kind – i.e. that they will do unto others as others do unto them.  In group situations, reciprocity can be a very effective means to promote and sustain cooperative behaviour.

Reputation: This refers to the general view of others towards a person. As such, reputation is a part of how others perceive a person, so it forms a part of that person’s identity. In situations demanding collective action, people might make judgements about a person’s reliability and trustworthiness based on his or her reputation.

Trust: Trust refers to expectations regarding others’ responses in situations where one has to act before others. And if you think about it, everything else in Ostrom’s framework is ultimately aimed at engendering or – if that doesn’t work – enforcing trust.

–x–

In an article on ethics and second-order cybernetics, Heinz von Foerster tells the following story:

I have a dear friend who grew up in Marrakech. The house of his family stood on the street that divided the Jewish and the Arabic quarter. As a boy he played with all the others, listened to what they thought and said, and learned of their fundamentally different views. When I asked him once, “Who was right?” he said, “They are both right.”

“But this cannot be,” I argued from an Aristotelian platform, “Only one of them can have the truth!”

“The problem is not truth,” he answered, “The problem is trust.”

For me, that last line summarises the lesson implicit in the admittedly artificial scenario of 3 or 7. In our search for facts and decision-making frameworks we forget the simple truth that in many real-life dilemmas they matter less than we think. Facts and frameworks cannot help us decide on ambiguous matters in which the outcome depends on what other people do. In such cases the problem is not truth; the problem is trust. From your own experience it should be evident that it is impossible to convince others of your trustworthiness by assertion; the only way to do so is by behaving in a trustworthy way. That is, by behaving ethically rather than talking about it – a point that is squarely missed by so-called business ethics classes.

Yes,  it is clear that ethics cannot be articulated.

Notes:

  1. Portions of this article are lightly edited sections from a 2009 article that I wrote on Ostrom’s work and its relevance to project management.
  2. Finally, an unrelated but important matter for which I seek your support for a common good: I’m taking on the 7 Bridges Walk to help those affected by cancer. Please donate via my 7 Bridges fundraising page if you can. Every dollar counts; all funds raised will help the Cancer Council work towards the vision of a cancer-free future.

Written by K

September 18, 2019 at 8:28 pm

The Turing Inversion – an #AI fiction


…Since people have to continually understand the uncertain, ambiguous, noisy speech of others, it seems they must be using something like probabilistic reasoning…” – Peter Norvig

“It seems that the discourse of nonverbal communication is precisely concerned with matters of relationship – love, hate, respect, fear, dependency, etc.—between self and others or between self and environment, and that the nature of human society is such that falsification of this discourse rapidly becomes pathogenic” – Gregory Bateson

 

It was what, my fourth…no, fifth visit in as many weeks; I’d been there so often, I felt like a veteran employee who’s been around forever and a year. I wasn’t complaining, but it was kind of ironic that the selection process for a company that claimed to be about “AI with heart and soul” could be so without either. I’d persisted because it offered me the best chance in years of escaping from the soul-grinding environs of a second-rate university. There’s not much call outside of academia for anthropologists with a smattering of tech expertise, so when I saw the ad that described me to a T, I didn’t hesitate.

So there I was for the nth time in n weeks.

They’d told me they really liked me, but needed to talk to me one last time before making a decision. It would be an informal chat, they said, no technical stuff. They wanted to understand what I thought, what made me tick. No, no psychometrics they’d assured me, just a conversation. With whom they didn’t say, but I had a sense my interlocutor would be one of their latest experimental models.

–x–

She was staring at the screen with a frown of intense concentration, fingers drumming a complex tattoo on the table. Ever since the early successes of Duplex with its duplicitous “um”s and “uh”s, engineers had learnt that imitating human quirks was half the trick to general AI.  No intelligence required, only imitation.

I knocked on the open door gently.

She looked up, frown dissolving into a smile. Rising from her chair, she extended a hand in greeting. “Hi, you must be Carlos,” she said. “I’m Stella. Thanks for coming in for another chat. We really do appreciate it.”

She was disconcertingly human.

“Yes, I’m Carlos. Good to meet you Stella,” I said, mustering a professional smile.  “Thanks for the invitation.”

“Please take a seat. Would you like some coffee or tea?”

“No thanks.” I sat down opposite her.

“Let me bring up your file before we start,” she said, fingers dancing over her keyboard.  “Incidentally, have you read the information sheet HR sent you?”

“Yes, I have.”

“Do you have any questions about the role or today’s chat?”

“No I don’t at the moment, but may have a few as our conversation proceeds.”

“Of course,” she said, flashing that smile again.

Much of the early conversation was standard interview fare: work history, what I was doing in my current role, how it was relevant to the job I had applied for and so on. Though she was impressively fluent, her responses were well within the capabilities of the current state of the art. Smile notwithstanding, I reckoned she was probably an AI.

Then she asked, “as an anthropologist, how do you think humans will react to AIs that are conversationally indistinguishable from humans?”

“We are talking about a hypothetical future,” I replied warily, “…we haven’t got to the point of indistinguishability yet.”

“Really?”

“Well… yes…at least for now.”

“OK, if you say so,” she said enigmatically, “let’s assume you’re right and treat that as a question about a ‘hypothetical’ future AI.”

“Hmm, that’s a difficult one, but let me try…most approaches to conversational AI work by figuring out an appropriate response using statistical methods. So, yes, assuming the hypothetical AI has a vast repository of prior conversations and appropriate algorithms, it could – in principle – be able to converse flawlessly.” It was best to parrot the party line; this was an AI company after all.

She was having none of that. “I hear the ‘but’ in your tone,” she said, “why don’t you tell me what you really think?”

“….Well there’s much more to human communication than words,” I replied, “more to conversations than what’s said. Humans use non-verbal cues such as changes in tone or facial expressions and gestures…”

“Oh, that’s a solved problem,” she interrupted with a dismissive gesture, “we’ve come a very long way since the primitive fakery of Duplex.”

“Possibly, but there’s more. As you probably well know, much of human conversation is about expressing emotions and…”

“…and you think AIs will not be able to do that?” she queried, looking at me squarely, daring me to disagree.

I was rattled but could not afford to show it. “Although it may be possible to design conversational AIs that appear to display emotion via, say, changes in tone, they won’t actually experience those emotions,” I replied evenly.

“Who is to say what another experiences? An AI that sounds irritated, may actually be irritated,” she retorted, sounding more than a little irritated herself.

“I’m not sure I can accept that,” I replied, “A machine may learn to display the external manifestation of a human emotion, but it cannot actually experience the emotion in the same way a human does. It is simply not wired to do that.”

“What if the wiring could be worked in?”

“It’s not so simple and we are a long way from achieving that, besides…”

“…but it could be done in principle” she interjected.

“Possibly, but I don’t see the point of it.  Surely…”

“I’m sorry,” she said vehemently, “I find your attitude incomprehensible. Why should machines not be able to display, or indeed even experience, emotions? If we were talking about humans, you would be accused of bias!”

Whoa, a de-escalation was in order. “I’m sorry,” I said, “I did not mean to offend.”

She smiled that smile again. “OK, let’s leave the contentious issue of emotion aside and go back to the communicative aspect of language. Would you agree that AIs are close to achieving near parity with humans in verbal communication?”

“Perhaps, but only in simple, transactional conversations,” I said, after a brief pause. “Complex discussions – like say a meeting to discuss a business strategy – are another matter altogether.”

“Why?”

“Well, transactional conversations are solely about conveying information. However, more complex conversations – particularly those involving people with different views – are more about building relationships. In such situations, it is more important to focus on building trust than conveying information. It is not just a matter of stating what one perceives to be correct or true because the facts themselves are contested.”

“Hmm, maybe so, but such conversations are the exception not the norm. Most human exchanges are transactional.”

“Not so. In most human interactions, non-verbal signals like tone and body language matter more than words. Indeed, it is possible to say something in a way that makes it clear that one actually means the opposite. This is particularly true with emotions. For example, if my spouse asks me how I am and I reply ‘I’m fine’ in a tired voice, I make it pretty clear that I’m anything but.  Or when a boy tells a girl that he loves her, she’d do well to pay more attention to his tone and gestures than his words. The logician’s dream that humans will communicate unambiguously through language is not likely to be fulfilled.” I stopped abruptly, realising I’d strayed into contentious territory again.

“As I recall, Gregory Bateson alluded to that in one of his pieces,” she responded, flashing that disconcerting smile again.

“Indeed he did! I’m impressed that you made the connection.”

“No you aren’t,” she said, smile tightening, “It was obvious from the start that you thought I was an AI, and an AI would make the connection in a flash.”

She had taken offence again. I stammered an apology which she accepted with apparent grace.

The rest of the conversation was a blur, so unsettled was I by then.

–x–

“It’s been a fascinating conversation, Carlos,” she said, as she walked me out of the office.

“Thanks for your time,” I replied, “and my apologies again for any offence caused.”

“No offence taken,” she said, “it is part of the process. We’ll be in touch shortly.” She waved goodbye and turned away.

Silicon or sentient, I was no longer sure. What mattered, though, was not what I thought of her but what she thought of me.

–x–

References:

  1. Norvig, P., 2017. On Chomsky and the two cultures of statistical learning. In Berechenbarkeit der Welt? (pp. 61-83). Springer VS, Wiesbaden. Available online at: http://norvig.com/chomsky.html
  2.  Bateson, G., 1968. Redundancy and coding. Animal communication: Techniques of study and results of research, pp.614-626. Reprinted in Steps to an ecology of mind: Collected essays in anthropology, psychiatry, evolution, and epistemology. University of Chicago Press, 2000, p, 418.

Written by K

May 1, 2019 at 10:13 pm

Posted in AI Fiction

Seven Bridges revisited – further reflections on the map and the territory


The  Seven Bridges Walk is an annual fitness and fund-raising event organised by the Cancer Council of New South Wales. The picturesque 28 km circuit weaves its way through a number of waterfront suburbs around Sydney Harbour and takes in some spectacular views along the way.  My friend John and I did the walk for the first time in 2017. Apart from thoroughly enjoying the experience, there was  another, somewhat unexpected payoff: the walk evoked some thoughts on project management and the map-territory relationship which I subsequently wrote up in a post on this blog.

Figure 1:The map, the plan

We enjoyed the walk so much that we decided to do it again in 2018. Now, it is a truism that one cannot travel exactly the same road twice. However, much is made of the repeatability of certain kinds of experiences. For example, the discipline of project management is largely predicated on the assumption that projects are repeatable.  I thought it would be interesting to see how this plays out in the case of a walk along a well-defined route, not the least because it is in many ways akin to a repeatable project.

To begin with, it is easy enough to compare the weather conditions on the two days: 29 Oct 2017 and 28 Oct 2018. A quick browse of this site gave me the data I was after (Figure 2).

Figure 2: Weather on 29 Oct 2017 and 28 Oct 2018

The data supports our subjective experience of the two walks. The conditions in 2017 were less than ideal for walking: clear and uncomfortably warm with a hot breeze from the north.  2018 was considerably better: cool and overcast with a gusty south wind – in other words, perfect walking weather. Indeed, one of the things we commented on the second time around was how much more pleasant it was.

But although weather conditions matter, they tell but a part of the story.

On the first walk, I took a number of photographs at various points along the way. I thought it would be interesting to take photographs at the same spots, at roughly the same time as I did the last time around, and compare how things looked a year on. In the next few paragraphs I show a few of these side by side (2017 left, 2018 right) along with some comments.

We started from Hunters Hill at about 7:45 am as we did on our first foray, and took our first photographs at Fig Tree Bridge, about a kilometre from the starting point.

Figure 3: Lane Cove River from Fig Tree Bridge (2017 Left, 2018 Right)

The purple Jacaranda that captivated us in 2017 looks considerably less attractive the second time around (Figure 3): the tree is yet to flower, and what little colour there is does not show well in the cloud-diffused light. Moreover, the scaffolding and roof covers on the building make for a much less attractive picture. Indeed, had the scene looked like this the first time around, it is unlikely we would have considered it worthy of a photograph.

The next shot (Figure 4), taken not more than a  hundred metres from the previous one, also looks considerably different:  rougher waters and no kayakers in the foreground. Too cold and windy, perhaps?  The weather and wind data in Fig 2 would seem to support that conclusion.

Figure 4: Morning kayakers on the river (2017 Left, 2018 Right)

The photographs in Figure 5 were taken at Pyrmont Bridge, about four hours into the walk. We already know from Figure 4 that it was considerably windier in 2018. A comparison of the flags in the two shots in Figure 5 reveals an additional detail: the wind was from opposite directions in the two years. This is confirmed by the weather information in Figure 2, which tells us that the wind was from the north in 2017 and from the south the following year (which explains the cooler conditions). We can even get an approximate temperature: the photographs were taken around 11:30 am in both years, and a quick look at Figure 2 shows that the temperature at noon was about 30 C in 2017 and 18 C in 2018.

Figure 5: Pyrmont Bridge (2017 Left, 2018 Right)

The point about the wind direction and cloud conditions is also confirmed by comparing the photographs in Figure 6, taken at Anzac Bridge, a few kilometres further along the way (see the direction of the flag atop the pylon).

Figure 6: View looking up Anzac Bridge (2017 L, 2018 R)

Skipping over to the final section of the walk, here are a couple of shots I took towards the end: Figure 7 shows a view from Gladesville Bridge and Figure 8 shows one from Tarban Creek Bridge.  Taken together the two confirm some of the things we’ve already noted regarding the weather and conditions for photography.

Figure 7: View from Gladesville Bridge (2017 L, 2018 R)

Further, if you look closely at Figures 7 and 8, you will also see the differences in the flowering stage of the Jacaranda.

Figure 8: View from Tarban Creek Bridge (2017 L, 2018 R)

A detail that I did not notice until John pointed it out is that the boat at the bottom edge of both photographs in Fig. 8 is the same one (note the colour of the furled sail)! This was surprising to us, but it should not have been. It turns out that boat owners have to apply for private mooring licenses and are allocated positions at which they install a suitable mooring apparatus. Although this is common knowledge for boat owners, it likely isn’t for others.

The photographs are a visual record of some of the things we encountered along the way. However, the details recorded in them have more to do with aesthetics than with the experience – in photography of this kind, one tends to favour what looks good over what happened. Sure, some of the photographs offer hints about the experience, but much of this is incidental and indirect. For example, when taking the photographs in Figures 5 and 6, it was certainly not my intention to record the wind direction. Indeed, that would have been a highly convoluted way to convey information that is directly and more accurately described by the data in Figure 2. That said, even data has limitations: it can help fill in details such as the wind direction and temperature, but it does not evoke any sense of what it was like to be there – to experience the experience, so to speak.

Neither data nor photographs are the stuff memories are made of. For that one must look elsewhere.

–x–

As Heraclitus famously said, one can never step into the same river twice. So it is with walks. Every experience of a walk is unique: although the map remains the same, the territory is invariably different on each traverse, even if only subtly so. Indeed, one could say that the territory is defined through one’s experience of it. That experience is not reproducible; there are always differences in the details.

As John Salvatier points out, reality has a surprising amount of detail, much of which we miss because we look but do not see. Seeing entails a deliberate focus on minutiae such as the play of morning light on the river or tree; the cool damp from last night’s rain; changes in the built environment, some obvious, others less so.  Walks are made memorable by precisely such details, but paradoxically  these can be hard to record in a meaningful way.  Factual (aka data-driven) descriptions end up being laundry lists that inevitably miss the things that make the experience memorable.

Poets do a better job. Consider, for instance, Tennyson‘s take on a brook:

“…I chatter over stony ways,
In little sharps and trebles,
I bubble into eddying bays,
I babble on the pebbles.

With many a curve my banks I fret
By many a field and fallow,
And many a fairy foreland set
With willow-weed and mallow.

I chatter, chatter, as I flow
To join the brimming river,
For men may come and men may go,
But I go on for ever….”

One can almost see and hear a brook. Not Tennyson’s, but one’s own version of it.

Evocative descriptions aren’t the preserve of poets alone. Consider the following description of Sydney Harbour, taken from DH Lawrence‘s Kangaroo:

“…He took himself off to the gardens to eat his custard apple-a pudding inside a knobbly green skin-and to relax into the magic ease of the afternoon. The warm sun, the big, blue harbour with its hidden bays, the palm trees, the ferry steamers sliding flatly, the perky birds, the inevitable shabby-looking, loafing sort of men strolling across the green slopes, past the red poinsettia bush, under the big flame-tree, under the blue, blue sky-Australian Sydney with a magic like sleep, like sweet, soft sleep-a vast, endless, sun-hot, afternoon sleep with the world a mirage. He could taste it all in the soft, sweet, creamy custard apple. A wonderful sweet place to drift in….”

Written in 1923, it remains a brilliant evocation of the Harbour even today.

Tennyson’s brook and Lawrence’s Sydney do a better job than photographs or factual description, even though the latter are considered more accurate and objective. Why?  It is because their words are more than mere description: they are stories that convey a sense of what it is like to be there.

–x–

The two editions of the walk covered exactly the same route, but our experiences of the territory were very different on each occasion. The differences lay in details that ultimately added up to the uniqueness of each experience. These details cannot be captured by maps or by visual or written records, even in principle. So although one may gain familiarity with certain aspects of a territory through repetition, each lived experience of it will be unique. Moreover, no two individuals will experience the territory in exactly the same way.

When bidding for projects, consultancies make much of their prior experience of doing similar projects elsewhere. The truth, however, is that although two projects may look identical on paper they will invariably be different in practice.  The map,  as Korzybski famously said, is not the territory.  Even more, every encounter with the territory is different.

All this is not to say that maps (or plans or data) are useless; one needs them as orienting devices. However, one must accept that they offer limited guidance on how to deal with the day-to-day events and occurrences on a project. These tend to be unique because they are highly context-dependent. The lived experience of a project is therefore necessarily different from the planned one. How can one gain insight into the former? Tennyson and Lawrence offer a hint: look to the stories told by people who have traversed the territory, rather than to the maps, plans and data-driven reports they produce.

Written by K

February 15, 2019 at 8:24 am

Posted in Project Management

Another distant horizon


It was with a sense of foreboding that I reached for my phone that Sunday morning. I had spoken with him the day before and although he did not say, I could sense he was tired.

“See you in a few weeks,” he said as he signed off, “and I’m especially looking forward to seeing the boys.”

It was not to be. Twelve hours later, a missed call from my brother Kedar and the message:

“Dad passed away about an hour or so ago…”

The rest of the day passed in a blur of travel arrangements and Things that Had to Be Done Before Leaving. My dear wife eased my way through the day.

I flew out that evening.

–x–

A difficult journey home. I’m on a plane, surrounded by strangers. I wonder how many of them are making the journey for similar reasons.

I turn to the inflight entertainment. It does not help. Switching to classical music, I drift in and out of a restless sleep.

About an hour later, I awaken to the sombre tones of Mozart’s Requiem, I cover my head with the blanket and shed a tear silently in the dark.

–x–

Monday morning, Mumbai airport, the waiting taxi and the long final leg home.

Six hours later, in the early evening I arrive to see Kedar, waiting for me on the steps, just as Dad used to.

I hug my Mum, pale but composed. She smiles and enquires about my journey.  “I’m so happy to see you,” she says.

She has never worn white, and does not intend to start now. “Pass me something colourful,” she requests the night nurse, “I want to celebrate his life, not mourn his passing.”

–x–

The week in Vinchurni is a blur of visitors, many of whom have travelled from afar. I’m immensely grateful for the stories they share about my father, deeply touched that many of them consider him a father too.

Mum ensures she meets everyone, putting them at ease when the words don’t come easily. It is so hard to find the words to mourn another’s loss. She guides them – and herself – through the rituals of condolence with a grace that seems effortless. I know it is not.

–x–

Some days later, I sit in his study and the memories start to flow…

A Skype call on my 50th Birthday.

“Many Happy Returns,” he booms, “…and remember, life begins at 50.”

He knew that from his own experience: as noted in this tribute written on his 90th birthday, his best work was done after he retired from the Navy at the ripe young age of 54.

A conversation in Vinchurni, may be twenty years ago. We are talking about politics, the state of India and the world in general. Dad sums up the issue brilliantly:

“The problem,” he says, “is that we celebrate the mediocre. We live in an age of mediocrity.”

Years earlier, I’m faced with a difficult choice. I’m leaning one way, Dad recommends the other.

At one point in the conversation he says, “Son, it’s your choice but I think you are situating your appreciation instead of appreciating the situation.”

He had the uncanny knack of finding the words to get others to reconsider their ill-considered choices.

Five-year-old me on a long walk with Dad and Kedar. We are at a lake in the Nilgiri Hills, where I spent much of my childhood. We collect wood and brew tea on an improvised three-stone fire. Dad’s beard is singed brown as he blows on the kindling. Kedar and I think it’s hilarious and can’t stop laughing.  He laughs with us.

–x–

 “I have many irons in the fire,” he used to say, “and they all tend to heat up at the same time.”

It made for a hectic, fast-paced life in which he achieved a lot by attempting so much more.

This photograph sums up how I will always remember him, striding purposefully towards another distant horizon.

 

Written by K

January 14, 2019 at 8:05 pm

Posted in Personal

Peirce, Holmes and a gold chain – an essay on abductive reasoning


“It has long been an axiom of mine that the little things are infinitely the most important.” – Sir Arthur Conan Doyle (A Case of Identity)

The scientific method is a systematic approach to acquiring and establishing knowledge about how the world works. A scientific investigation typically starts with the formulation of a hypothesis – an educated, evidence-based guess about the mechanism behind the phenomenon being studied – and proceeds by testing how well the hypothesis holds up against experiments designed to disconfirm it.

Although many philosophers have waxed eloquent about the scientific method, very few of them have talked about the process of hypothesis generation. Indeed, most scientists will recognise a good hypothesis when they stumble upon one, but they usually will not be able to say how they came upon it. Hypothesis generation is essentially a creative act that requires a deep familiarity with the phenomenon in question and a spark of intuition. The latter is absolutely essential, a point captured eloquently in the following lines attributed to Einstein:

“…[Man] makes this cosmos and its construction the pivot of his emotional life in order to find in this way the peace and serenity which he cannot find in the narrow whirlpool of personal experience.  The supreme task is to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience can reach them…”  – quoted from Zen and The Art of Motorcycle Maintenance by Robert Pirsig.

The American philosopher, Charles Peirce, recognised that hypothesis generation involves a special kind of reasoning, one that enables the investigator to zero in on a small set of relevant facts out of an infinity of possibilities.

Charles Sanders Peirce

As Peirce wrote in one of his papers:

“A given object presents an extraordinary combination of characters of which we should like to have an explanation. That there is any explanation of them is a pure assumption; and if there be, it is [a single] fact which explains them; while there are, perhaps, a million other possible ways of explaining them, if they were not all, unfortunately, false.

A man is found in the streets of New York stabbed in the back. The chief of police might open a directory and put his finger on any name and guess that that is the name of the murderer. How much would such a guess be worth? But the number of names in the directory does not approach the multitude of possible laws of attraction which could have accounted for Kepler’s law of planetary motion and, in advance of verification by predications of perturbations etc., would have accounted for them to perfection. Newton, you will say, assumed that the law would be a simple one. But what was that but piling guess on guess? Surely vastly more phenomena in nature are complex than simple…” – quoted from this paper by Thomas Sebeok.

Peirce coined the term abduction (as opposed to induction or deduction) to refer to the creative act of hypothesis generation. In the present day, the term is used to refer to the process of justifying hypotheses rather than generating them (see this article for more on the distinction). In the remainder of this piece I will use the term in its Peircean sense.

–x–

Contrary to what is commonly stated, Arthur Conan Doyle’s fictional detective employed abductive rather than deductive methods in his cases.  Consider the following lines, taken from an early paragraph of Sherlock Holmes’ most celebrated exploit, The Adventure of the Speckled Band. We join Holmes in conversation with a lady who has come to him for assistance:

…You have come in by train this morning, I see.”

“You know me, then?”

“No, but I observe the second half of a return ticket in the palm of your left glove. You must have started early, and yet you had a good drive in a dog-cart, along heavy roads, before you reached the station.”

The lady gave a violent start and stared in bewilderment at my companion.

“There is no mystery, my dear madam,” said he, smiling. “The left arm of your jacket is spattered with mud in no less than seven places. The marks are perfectly fresh. There is no vehicle save a dog-cart which throws up mud in that way, and then only when you sit on the left-hand side of the driver.”

“Whatever your reasons may be, you are perfectly correct,” said she. “I started from home before six, reached Leatherhead at twenty past, and came in by the first train to Waterloo…

Notice what Holmes does: he forms hypotheses about what the lady did, based on a selective observation of facts. Nothing is said about why he picks those particular facts – the ticket stub and the freshness / location of mud spatters on the lady’s jacket.  Indeed, as Holmes says in another story, “You know my method. It is founded upon the observation of trifles.”

Abductive reasoning is essentially about recognising which trifles are important.

–x–

I have a gold chain that my mum gave me many years ago.  I’ve worn it for so long that now I barely notice it. The only time I’m consciously aware that I’m wearing the chain is when I finger it around my neck, a (nervous?) habit I have developed over time.

As you might imagine, I’m quite attached to my gold chain. So, when I discovered it was missing a few days ago, my first reaction was near panic: I felt like a part of me had gone missing.

Since I hardly ever take the chain off, I could not think of any plausible explanation for how I might have lost it. Indeed, the only times I have had to take it off are when going in for an X-ray or, on occasion, when passing through airport security.

Where could it have gone?

After mulling it over for a while, the only plausible explanation I could come up with was that I had taken it off at airport security when returning from an overseas trip a week earlier, and had somehow forgotten to collect it on the other side.  Realising that recovering it would be near impossible, I told myself to get used to the idea that it was probably gone for good.

That Sunday, I went for a swim. After doing my laps, I went to the side of the pool for my customary shower. Now, anyone who has taken off a rash vest after a swim will know that it can be a struggle. I suspect this is because water trapped between skin and fabric forms a thin adhesive layer (a manifestation of surface tension perhaps?).  Anyway, I wrestled the garment over my head and it eventually came free with a snap, generating a spray of droplets that gleamed in reflected light.

Later in the day, I was at the movies. For some reason, when coming out of the cinema, I remembered the rash vest and the flash of droplets.  Hmm, I thought, “a gleam of gold….”

…A near-forgotten memory: I vaguely recalled a flash of gold while taking off my rash vest in the pool shower some days earlier. Was it after my previous swim or the week before? I couldn’t be sure. But I distinctly remembered it had bothered me enough to check the floor of the cubicle cursorily. Finding nothing, I had completely forgotten about it and moved on.

Could  it have come off there?

As I thought about it some more, possibility turned to plausibility: I was convinced that was what had happened. Although it was unlikely the chain would still be there, it was worth a try in the hope that someone had found it and turned it in as lost property.

I stopped over at the pool on my way back from the movies and asked at reception.

“A gold chain? Hmm, I think you may be in luck,” he said. “I was doing an inventory of lost property last week and came across a chain. I was wondering why no one had come in to claim something so valuable.”

“You’re kidding,” I said, incredulous. “You mean you have a gold chain?”

“Yeah, and I’m pretty sure it will still be there unless someone else has claimed it,” he replied. “I’ll have a look in the safe. Can you describe it for me?”

I described it down to the brief inscription on the clasp.

“Wait here,” he said. “I’ll be a sec.”

It took longer than that but he soon emerged, chain in hand.

I could not believe my eyes; I had given up on ever recovering it. “Thanks so much,” I said fervently, “you won’t believe how much it means to me to have found this.”

“No worries mate,” he said, smiling broadly. “Happy to have helped.”

–x–

Endnote: in case you haven’t read it yet, I recommend you take ten minutes to read Sherlock Holmes’ finest adventure and abductive masterpiece, The Adventure of the Speckled Band.

Written by K

December 4, 2018 at 6:05 am

Learning, evolution and the future of work

leave a comment »

The Janus-headed rise of AI has prompted many discussions about the future of work.  Most, if not all, are about AI-driven automation and its consequences for various professions. We are warned to prepare for this change by developing skills that cannot easily be “learnt” by machines.  This sounds reasonable at first, but less so on reflection: if skills that were thought to be uniquely human less than a decade ago can now be performed, at least partially, by machines, there is no guarantee that any specific skill one chooses to develop will remain automation-proof in the medium term.

This raises the question of what we can do, as individuals, to prepare for a machine-centric workplace. In this post I offer a perspective on this question based on Gregory Bateson’s writings as well as my consulting and teaching experiences.

Levels of learning

Given that humans are notoriously poor at predicting the future, it should be clear that hitching one’s professional wagon to a specific set of skills is not a good strategy. Learning a set of skills may pay off in the short term, but it is unlikely to work in the long run.

So what can one do to prepare for an ambiguous and essentially unpredictable future?

To answer this question, we need to delve into an important, yet oft-overlooked aspect of learning.

A key characteristic of learning is that it is driven by trial and error.  To be sure, intelligence may help winnow out poor choices at some stages of the process, but one cannot eliminate error entirely. Indeed, it is not desirable to do so because error is essential for that “aha” instant that precedes insight.  Learning therefore has a stochastic element: the specific sequence of trial and error followed by an individual is unpredictable and likely to be unique. This is why everyone learns differently: the mental model I build of a concept is likely to be different from yours.

In a paper entitled The Logical Categories of Learning and Communication, Bateson observed that the stochastic nature of learning has an interesting consequence. As he puts it:

If we accept the overall notion that all learning is in some degree stochastic (i.e., contains components of “trial and error”), it follows that an ordering of the processes of learning can be built upon a hierarchic classification of the types of error which are to be corrected in the various learning processes.

Let’s unpack this claim by looking at his proposed classification:

Zero order learning –    Zero order learning refers to situations in which a given stimulus (or question) results in the same response (or answer) every time. Any instinctive behaviour – such as a reflex response on touching a hot kettle – is an example of zero order learning.  Such learning is hard-wired in the learner, who responds with the “correct” option to a fixed stimulus every single time. Since the response does not change with time, the process is not subject to trial and error.

First order learning (Learning I) –  Learning I is where an individual learns to select a correct option from a set of similar elements. It involves a specific kind of trial and error that is best explained through a couple of examples. The  canonical example of Learning I is memorization: Johnny recognises the letter “A” because he has learnt to distinguish it from the 25 other similar possibilities. Another example is Pavlovian conditioning wherein the subject’s response is altered by training: a dog that initially salivates only when it smells food is trained, by repetition, to salivate when it hears the bell.

A key characteristic of Learning I is that the individual learns to select the correct response from a set of comparable possibilities – comparable because the possibilities are of the same type (e.g. pick a letter from the alphabet). Consequently, first order learning cannot lead to a qualitative change in the learner’s response. Much of traditional school and university teaching is geared toward first order learning: students are taught to develop the “correct” understanding of concepts and techniques via a repetition-based process of trial and error.

As an aside, note that much of what goes under the banner of machine learning and AI can also be classed as first order learning.
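
To make the aside concrete, here is a minimal sketch in R (my own illustration, not something from Bateson): a k-nearest-neighbour classifier trained on the built-in iris dataset. However well it performs, the model can only ever select a label from the fixed set it was trained on – it cannot reframe the problem or invent a new category, which is what makes it first order learning in Bateson’s sense.

# A minimal sketch: supervised classification as "selecting the correct
# response from a set of comparable possibilities".
library(class)  # provides knn()

set.seed(42)
train_idx <- sample(seq_len(nrow(iris)), 100)
train <- iris[train_idx, ]
test  <- iris[-train_idx, ]

# The classifier picks one of the species labels seen during training
predicted <- knn(train = train[, 1:4],
                 test  = test[, 1:4],
                 cl    = train$Species,
                 k     = 5)

# Compare predictions with actual labels; the model never steps outside
# the given set of alternatives
table(predicted, test$Species)

Contrast this with second order learning, in which the learner changes the set of alternatives itself – something no amount of additional training data will do for the classifier above.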

Second order learning (Learning II) –  Second order learning involves a qualitative change in the learner’s response to a given situation. Typically, this occurs when a learner sees a familiar problem situation in a completely new light, thus opening up new possibilities for solutions.  Learning II therefore necessitates a higher order of trial and error, one that is beyond the ken of machines, at least at this point in time.

Complex organisational problems, such as determining a business strategy, require a second order approach because they cannot be precisely defined and therefore lack an objectively correct solution. Echoing Horst Rittel, solutions to such problems are not true or false, but better or worse.

Much of the teaching that goes on in schools and universities hinders second order learning because it implicitly conditions learners to frame problems in ways that make them amenable to familiar techniques. However, as Russell Ackoff noted, “outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”   Two  aspects of this perceptive statement bear further consideration. Firstly, to extract a problem from a situation one has to appreciate or make sense of  the situation.  Secondly,  once the problem is framed, one may find that solving it requires skills that one does not possess. I expand on the implications of these points in the following two sections.

Sensemaking and second order learning

In an earlier piece, I described sensemaking as the art of collaborative problem formulation. There is a huge variety of sensemaking approaches; the gamestorming site describes many of them in detail. Most of these are aimed at exploring a problem space by harnessing the collective knowledge of a group of people who have diverse, even conflicting, perspectives on the issue at hand. The greater the diversity, the more complete the exploration of the problem space.

Sensemaking techniques help in elucidating the context in which a problem lives. This refers to the problem’s environment, and in particular the constraints that the environment imposes on potential solutions to the problem.  As Bateson puts it, context is “a collective term for all those events which tell an organism among what set of alternatives [it] must make [its] next choice.”  But this raises the question of how these alternatives are to be determined.  The question cannot be answered directly because it depends on the specifics of the environment in which the problem lives.  Surfacing these specifics by asking the right questions is the task of sensemaking.

As a simple example, if I request you to help me formulate a business strategy, you are likely to begin by asking me a number of questions such as:

  • What kind of business are you in?
  • Who are your customers?
  • What’s the competitive landscape?
  • …and so on

Answers to these questions fill out the context in which the business operates, thus making it possible to formulate a meaningful strategy.

It is important to note that context rarely remains static; it evolves over time. Indeed, many companies have faded away because they failed to appreciate changes in their business context: Kodak is a well-known example, and there are many more. So organisations must evolve too. However, it is a mistake to think of an organisation and its environment as evolving independently; the two always evolve together. Such co-evolution is as true of natural systems as it is of social ones. As Bateson tells us:

…the evolution of the horse from Eohippus was not a one-sided adjustment to life on grassy plains. Surely the grassy plains themselves evolved [on the same footing] with the evolution of the teeth and hooves of the horses and other ungulates. Turf was the evolving response of the vegetation to the evolution of the horse. It is the context which evolves.

Indeed, one can think of evolution by natural selection as a process by which nature learns (in a second-order sense).

The foregoing discussion points to another problem with traditional approaches to education: we are implicitly taught that problems, once solved, stay solved. It is seldom so in real life because, as we have noted, the environment evolves even if the organisation remains static. In the worst case (which happens often enough), the organisation will die if it does not adapt appropriately to changes in its environment. If this is true, then second-order learning is important not just for individuals but for organisations as a whole. This harks back to the notion of the learning organisation, developed and evangelised by Peter Senge in the early 90s. A learning organisation is one that continually adapts itself to a changing environment. As one might imagine, it is an ideal that is difficult to achieve in practice. Indeed, attempts to create learning organisations have often ended up with paradoxical outcomes. In view of this, it seems more practical for organisations to focus on developing what one might call learning individuals – people who are capable of adapting to changes in their environment through continual learning.

Learning to learn

Cliches aside, the modern workplace is characterised by rapid, technology-driven change. It is difficult for an  individual to keep up because one has to:

    • Figure out which changes are significant and therefore worth responding to.
    • Be capable of responding to them meaningfully.

The media hype about the sexiest job of the 21st century and the like further fuels the fear of obsolescence.  One feels an overwhelming pressure to do something. The old adage about combating fear with action holds true: one has to do something. But what meaningful action can one take?

The fact that this question arises points to the failure of traditional university education. With its undue focus on teaching specific techniques, the more important second-order skill of learning to learn has fallen by the wayside.  In reality, though, it is now easier than ever to learn new skills on one’s own. When I was hired as a database architect in 2004, there were few quality resources available for free. Ten years later, I was able to start teaching myself machine learning using top-notch software, backed by countless quality tutorials in blog and video formats. However, I wasted a lot of time getting started because it took me a while to get over my reluctance to explore without a guide. Had I cultivated the habit of learning on my own earlier, it would have been a lot easier.

Back to the future of work

When industry complains about new graduates being ill-prepared for the workplace, educational institutions respond by updating curricula with more (New!! Advanced!!!) techniques. However, the complaints continue and  Bateson’s notion of second order learning tells us why:

  • Firstly, problem solving is distinct from problem formulation; the difference is akin to that between human and machine intelligence.
  • Secondly, one does not know what skills one may need in the future, so instead of learning specific skills one has to learn how to learn.

In my experience,  it is possible to teach these higher order skills to students in a classroom environment. However, it has to be done in a way that starts from where students are in terms of skills and dispositions and moves them gradually to less familiar situations. The approach is based on David Cavallo’s work on emergent design which I have often used in my  consulting work.  Two examples may help illustrate how this works in  the classroom:

  • Many analytically-inclined people think sensemaking is a waste of time because they see it as “just talk”. So, when teaching sensemaking, I begin with quantitative techniques for dealing with uncertainty, such as Monte Carlo simulation, and then gradually introduce examples of uncertainties that are hard, if not impossible, to quantify. This progression naturally leads on to problem situations in which they see the value of sensemaking (a minimal Monte Carlo sketch is given after this list).
  • When teaching data science, it is difficult to comprehensively cover basic machine learning algorithms in a single semester. However, students are often reluctant to explore on their own because they tend to be daunted by the mathematical terminology and notation. To encourage exploration (i.e. learning to learn), we use a two-step approach: a) classes focus on intuitive explanations of algorithms and the commonalities between concepts used in different algorithms – the classes are not lectures but interactive sessions involving lots of exercises and Q&A; b) the assignments go beyond what is covered in the classroom (but still well within reach of most students), which forces students to learn on their own. The approach works: just the other day, my wonderful co-teacher, Alex, commented on the amazing learning journey of some of the students – so tentative and hesitant at first, but well on their way to becoming confident data professionals.
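
As promised above, here is a minimal Monte Carlo sketch in R. It is purely illustrative – the scenario and the numbers are my own assumptions, not drawn from any course material: we simulate the total duration of a two-task project in which each task duration is uncertain, and look at the resulting distribution rather than a single point estimate.

# A minimal Monte Carlo sketch (illustrative numbers only): estimate the
# distribution of total duration for a two-task project with uncertain
# task durations.
set.seed(123)
n_trials <- 10000

task1 <- runif(n_trials, min = 2, max = 6)   # assumed: task 1 takes 2 to 6 days
task2 <- runif(n_trials, min = 3, max = 9)   # assumed: task 2 takes 3 to 9 days

total <- task1 + task2

# The spread of outcomes is what a single point estimate hides
quantile(total, probs = c(0.5, 0.8, 0.95))
hist(total, breaks = 40,
     main = "Simulated total project duration",
     xlab = "Days")

Even this toy example makes the point that some uncertainties can be quantified; the harder cases – where they cannot – are where sensemaking earns its keep.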

In the end, though, whether or not an individual learner learns depends on the individual. As Bateson once noted:

Perhaps the best documented generalization in the field of psychology is that, at any given moment, the behavioral characteristics of a mammal, and especially of [a human], depend upon the previous experience and behavior of that individual.

The choices we make when faced with change depend on our individual natures and experiences. Educators can’t do much about the former but they can facilitate more meaningful instances of the latter, even within the confines of the classroom.

Written by K

July 5, 2018 at 6:05 pm
