The Turing Inversion – an #AI fiction
“…Since people have to continually understand the uncertain, ambiguous, noisy speech of others, it seems they must be using something like probabilistic reasoning…” – Peter Norvig
“It seems that the discourse of nonverbal communication is precisely concerned with matters of relationship – love, hate, respect, fear, dependency, etc.—between self and others or between self and environment, and that the nature of human society is such that falsification of this discourse rapidly becomes pathogenic” – Gregory Bateson
It was what, my fourth…no, fifth visit in as many weeks; I’d been there so often, I felt like a veteran employee who’s been around forever and a year. I wasn’t complaining, but it was kind of ironic that the selection process for a company that claimed to be about “AI with heart and soul” could be so without either. I’d persisted because it offered me the best chance in years of escaping from the soul-grinding environs of a second-rate university. There’s not much call outside of academia for anthropologists with a smattering of tech expertise, so when I saw the ad that described me to a T, I didn’t hesitate.
So there I was for the nth time in n weeks.
They’d told me they really liked me, but needed to talk to me one last time before making a decision. It would be an informal chat, they said, no technical stuff. They wanted to understand what I thought, what made me tick. No, no psychometrics, they’d assured me, just a conversation. With whom they didn’t say, but I had a sense my interlocutor would be one of their latest experimental models.
–x–
She was staring at the screen with a frown of intense concentration, fingers drumming a complex tattoo on the table. Ever since the early successes of Duplex with its duplicitous “um”s and “uh”s, engineers had learnt that imitating human quirks was half the trick to general AI. No intelligence required, only imitation.
I knocked on the open door gently.
She looked up, frown dissolving into a smile. Rising from her chair, she extended a hand in greeting. “Hi, you must be Carlos,” she said. “I’m Stella. Thanks for coming in for another chat. We really do appreciate it.”
She was disconcertingly human.
“Yes, I’m Carlos. Good to meet you Stella,” I said, mustering a professional smile. “Thanks for the invitation.”
“Please take a seat. Would you like some coffee or tea?”
“No thanks.” I sat down opposite her.
“Let me bring up your file before we start,” she said, fingers dancing over her keyboard. “Incidentally, have you read the information sheet HR sent you?”
“Yes, I have.”
“Do you have any questions about the role or today’s chat?”
“No I don’t at the moment, but may have a few as our conversation proceeds.”
“Of course,” she said, flashing that smile again.
Much of the early conversation was standard interview fare: work history, what I was doing in my current role, how it was relevant to the job I had applied for, and so on. Though she was impressively fluent, her responses were well within the capabilities of the current state of the art. Smile notwithstanding, I reckoned she was probably an AI.
Then she asked, “As an anthropologist, how do you think humans will react to AIs that are conversationally indistinguishable from humans?”
“We are talking about a hypothetical future,” I replied warily, “…we haven’t got to the point of indistinguishability yet.”
“Really?”
“Well… yes…at least for now.”
“OK, if you say so,” she said enigmatically. “Let’s assume you’re right and treat that as a question about a ‘hypothetical’ future AI.”
“Hmm, that’s a difficult one, but let me try…most approaches to conversational AI work by figuring out an appropriate response using statistical methods. So, yes, assuming the hypothetical AI has a vast repository of prior conversations and appropriate algorithms, it could – in principle – converse flawlessly.” It was best to parrot the party line; this was an AI company, after all.
She was having none of that. “I hear the ‘but’ in your tone,” she said. “Why don’t you tell me what you really think?”
“…Well, there’s much more to human communication than words,” I replied, “more to conversations than what’s said. Humans use non-verbal cues such as changes in tone, facial expressions and gestures…”
“Oh, that’s a solved problem,” she interrupted with a dismissive gesture. “We’ve come a very long way since the primitive fakery of Duplex.”
“Possibly, but there’s more. As you well know, much of human conversation is about expressing emotions and…”
“…and you think AIs will not be able to do that?” she queried, looking at me squarely, daring me to disagree.
I was rattled but could not afford to show it. “Although it may be possible to design conversational AIs that appear to display emotion via, say, changes in tone, they won’t actually experience those emotions,” I replied evenly.
“Who is to say what another experiences? An AI that sounds irritated may actually be irritated,” she retorted, sounding more than a little irritated herself.
“I’m not sure I can accept that,” I replied. “A machine may learn to display the external manifestation of a human emotion, but it cannot actually experience the emotion in the same way a human does. It is simply not wired to do that.”
“What if the wiring could be worked in?”
“It’s not so simple and we are a long way from achieving that, besides…”
“…but it could be done in principle,” she interjected.
“Possibly, but I don’t see the point of it. Surely…”
“I’m sorry,” she said vehemently. “I find your attitude incomprehensible. Why should machines not be able to display, or indeed even experience, emotions? If we were talking about humans, you would be accused of bias!”
Whoa, a de-escalation was in order. “I’m sorry,” I said, “I did not mean to offend.”
She smiled that smile again. “OK, let’s leave the contentious issue of emotion aside and go back to the communicative aspect of language. Would you agree that AIs are close to achieving parity with humans in verbal communication?”
“Perhaps, but only in simple, transactional conversations,” I said, after a brief pause. “Complex discussions – like, say, a meeting to discuss a business strategy – are another matter altogether.”
“Why?”
“Well, transactional conversations are solely about conveying information. However, more complex conversations – particularly those involving people with different views – are more about building relationships. In such situations, it is more important to focus on building trust than on conveying information. It is not just a matter of stating what one perceives to be correct or true, because the facts themselves are contested.”
“Hmm, maybe so, but such conversations are the exception, not the norm. Most human exchanges are transactional.”
“Not so. In most human interactions, non-verbal signals like tone and body language matter more than words. Indeed, it is possible to say something in a way that makes it clear that one actually means the opposite. This is particularly true with emotions. For example, if my spouse asks me how I am and I reply ‘I’m fine’ in a tired voice, I make it pretty clear that I’m anything but. Or when a boy tells a girl that he loves her, she’d do well to pay more attention to his tone and gestures than his words. The logician’s dream that humans will communicate unambiguously through language is not likely to be fulfilled.” I stopped abruptly, realising I’d strayed into contentious territory again.
“As I recall, Gregory Bateson alluded to that in one of his pieces,” she responded, flashing that disconcerting smile again.
“Indeed he did! I’m impressed that you made the connection.”
“No, you aren’t,” she said, smile tightening. “It was obvious from the start that you thought I was an AI, and an AI would make the connection in a flash.”
She had taken offence again. I stammered an apology which she accepted with apparent grace.
The rest of the conversation was a blur, so unsettled was I by then.
–x–
“It’s been a fascinating conversation, Carlos,” she said, as she walked me out of the office.
“Thanks for your time,” I replied, “and my apologies again for any offence caused.”
“No offence taken,” she said. “It is part of the process. We’ll be in touch shortly.” She waved goodbye and turned away.
Silicon or sentient, I was no longer sure. What mattered, though, was not what I thought of her but what she thought of me.
–x–
References:
- Norvig, P., 2017. On Chomsky and the two cultures of statistical learning. In Berechenbarkeit der Welt? (pp. 61-83). Springer VS, Wiesbaden. Available online at: http://norvig.com/chomsky.html
- Bateson, G., 1968. Redundancy and coding. In Animal Communication: Techniques of Study and Results of Research, pp. 614-626. Reprinted in Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press, 2000, p. 418.