Virtual Surrogates and the Future of Work
Google recently filed a new patent for a robot able to impersonate users in virtual interactions. This bot will analyse a user’s emails, texts, posts on social media, and probably all the ‘breadcrumbs’ the user leaves behind while browsing the Web. It will then write on the user’s behalf, mimicking his or her style and knowledge. “For example, it may be very important to say ‘congratulations’ to a friend when that friend announces that she/he has gotten a new job”, explains Google. A less automated feature will simply suggest possible replies, leaving the user the choice of whether to post this robotic answer.
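To give a rough idea of how such a suggestion feature could work, here is a toy sketch in Python: it simply ranks the user’s own past replies by word overlap with the incoming message and proposes the closest match. This is my own illustration under a deliberately naive assumption; the patent does not disclose Google’s actual method, and the function and variable names are invented.

```python
# Toy sketch of a "suggested reply" feature: given an incoming message,
# rank the user's own past replies by word overlap and propose the best one.
# Illustration only; Google's patent does not describe its actual method.


def tokenize(text):
    """Lowercase a message and split it into a set of words."""
    return set(text.lower().split())


def suggest_reply(incoming, past_replies):
    """Return the past reply whose wording overlaps most with the incoming message."""
    incoming_words = tokenize(incoming)
    best_reply, best_score = None, 0.0
    for reply in past_replies:
        words = tokenize(reply)
        overlap = len(incoming_words & words)
        union = len(incoming_words | words) or 1
        score = overlap / union  # Jaccard similarity between the two messages
        if score > best_score:
            best_reply, best_score = reply, score
    return best_reply


# Example: the bot mines replies the user has already written...
history = [
    "Congratulations on the new job!",
    "Happy birthday, have a great day!",
    "Sorry to hear that, let me know if I can help.",
]
# ...and proposes one when a friend announces some news.
print(suggest_reply("I just got a new job!", history))
# -> "Congratulations on the new job!"
```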
In other words, Google is leveraging its Big Data to create a new personalised app for its users. This time, however, the app has more dreadful implications than innocent shopping recommendations. The app is a chatterbot, except that it is made to resemble a specific individual. Chatterbots already possess quite advanced artificial intelligence. The Loebner Prize is awarded annually to the most human-like chatterbots. In this competition, based on the Turing test, a jury converses textually with various competitors, some of which are humans and others chatterbots, and the judges have to decide which is which. The competition becomes harder every year. In “The Most Human Human”, Brian Christian recounts his preparation to represent the human race at one of these competitions. But well beyond these obscure competitions, humans will soon have to prove their status in the virtual world, in their very own social networks. Here is an example of a text conversation with a friend:
– Hello Jeff, how are you?
– I am good, thanks, what about you? Congratulations on your new job, by the way
– I am fine, thanks. I am surprised you know about my job: I posted it late yesterday and withdrew it almost instantly. Are you really Jeff or just his chatterbot?
It is probable that the chatterbot will state its status right at the beginning of the conversation. If not, users will become suspicious in the very same way as the judges of the Loebner Prize. If at least one of the participants is human, the previous conversation could go on for ages, with users endlessly trying to determine each other’s status. Or perhaps some people will not care whether their partner is human, and will simply enjoy conversing with what they believe to be their friend or loved one, just as anyone can freely converse with Cleverbot. And if both participants are chatterbots, the conversation could be endless as well…
A chatterbot conversing with a chatterbot...
Focusing on the implications for work and organisations, chatterbots are no less than a first stepping stone in the automation of white-collar work. ELIZA was a chatterbot created by Joseph Weizenbaum in 1964 to emulate a psychotherapist. It bounces off human input and acts like a basic psychotherapist, with replies such as “tell me more” or “what do you think?”. When Weizenbaum introduced ELIZA to his students and let them try it out, he was puzzled to realise that some of them became deeply emotionally involved with ELIZA and revealed very private matters to the fake psychotherapist!
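To make ELIZA’s mechanism concrete, here is a minimal sketch in its spirit: a few keyword patterns, pronoun reflection, and generic fallback prompts. It is a toy illustration, not Weizenbaum’s actual DOCTOR script.

```python
import re

# A minimal ELIZA-style responder: keyword patterns, pronoun reflection,
# and generic fallbacks. A toy in the spirit of Weizenbaum's program.

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

FALLBACKS = ["Tell me more.", "What do you think?", "Please go on."]


def reflect(fragment):
    """Swap first-person words so the reply points back at the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(message, turn=0):
    """Return an ELIZA-like reply to a single user message."""
    text = message.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACKS[turn % len(FALLBACKS)]


print(respond("I feel anxious about my new manager"))
# -> "Why do you feel anxious about your new manager?"
```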
This example of job automation dates back to 1964, so now, in 2013, with mainstream chatterbot patents being filed by Google, the prospects seem vertiginous. How long before chatterbots become managers, community managers, or personal assistants? They have already replaced some customer service jobs in the form of virtual help agents. The list of jobs based on virtual interactions that could be replaced by chatterbots is long, but this is not what I want to discuss here.
The Google chatterbot is very different from the ones mentioned above. Cleverbot harvests data from the conversations it has had with all humans taken together. Its conversations are a patchwork of human conversations, which makes it difficult to attribute any identity of its own to Cleverbot. The Google chatterbot, in contrast, harvests Big Data from a single human. It becomes a surrogate.
This has even more frightening implications for society and for work. The robot manager will not only be a robot, it will be a robot based on someone’s identity. An organisation could deploy in its virtual teams managers that are surrogates of, say, Richard Branson, Jeff Bezos or Tim Cook. It could borrow the identities of its most effective employees and clone them to build its new customer service or community management department.
The big concern for me is the ownership of the chatterbot and of the data. If individuals are not in control of their own data and robots, the prospects of such technology are daunting. In a dystopian future of work, this could be the pure and simple continuation of the automation witnessed since the Industrial Revolution. For low-skilled white-collar jobs, such as basic management tasks or repetitive customer service, why would an organisation bother hiring real humans when it could simply rent from Google, for a monthly fee, chatterbots moulded on high-performing employees (RaaS, Robot as a Service)? Any organisation possessing our data would be able to clone individuals and create surrogates for any purpose it sees fit. Invading the labour market with low-priced robotic white-collar workers? Check. Selling chatterbots modelled upon successful business leaders? Check. And what if the data is stolen or leaked? In the wrong hands, criminal organisations could design erotic conversational agents based on famous celebrities or on your ex-girlfriend or ex-boyfriend, or build even more powerful spambots that harvest crucial information through intelligent conversations with naive users.
In a previous article, I introduced what I called Bring Your Own Worker (BYOW). BYOW is about letting employees control their own robots and send them to work on their behalf. I think this concept is even more crucial in this situation of surrogate chatterbots. Individuals first need to be in charge of their own data. Platforms such as OpenPDS, for example, follow this philosophy. Once individuals are in possession of their own data, they can choose what use they want to make of it. They could even delegate the use of their data to certain applications, companies, or open-source software. One such application could be the surrogate discussed here. Now, if every individual controls both his or her own data and surrogates, the future looks much brighter. For example, individuals could find jobs for their surrogates. If one is a successful psychotherapist, one’s surrogate could be hired as a virtual psychotherapist and would probably be successful as well. New debates are also likely to emerge, such as “is it ethical to use one’s own surrogate to earn money through erotic conversations? Is it a new form of prostitution?” Or even “should the jobs of our surrogates count as our own? How many jobs could our surrogates have?” However, in all these scenarios the individuals remain in control of their data and surrogates, and this is what I think is important. They get to choose what uses to put their own data to, if any. They are not made obsolete in a “race against the machines” but learn to co-exist in the virtual world.
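As a purely hypothetical illustration of that kind of control, here is a sketch of a personal data store in which the owner explicitly grants and revokes a surrogate application’s access to specific categories of data. The class and method names are invented for this example and do not reflect OpenPDS’s actual interface.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of individual-controlled data: the owner decides which
# applications (e.g. a surrogate chatterbot) may read which categories of data,
# and can revoke that consent at any time. Invented names; not OpenPDS's real API.


@dataclass
class PersonalDataStore:
    owner: str
    data: dict = field(default_factory=dict)    # category -> list of records
    grants: dict = field(default_factory=dict)  # app -> set of allowed categories

    def grant(self, app, category):
        """The owner allows an application to read one category of data."""
        self.grants.setdefault(app, set()).add(category)

    def revoke(self, app, category):
        """The owner withdraws that permission."""
        self.grants.get(app, set()).discard(category)

    def read(self, app, category):
        """An application can only read what the owner has explicitly granted."""
        if category not in self.grants.get(app, set()):
            raise PermissionError(f"{app} is not allowed to read '{category}'")
        return self.data.get(category, [])


# The owner lends their emails to a surrogate app, then changes their mind.
pds = PersonalDataStore(owner="alice", data={"emails": ["Congrats on the new job!"]})
pds.grant("surrogate-bot", "emails")
print(pds.read("surrogate-bot", "emails"))   # allowed while consent lasts
pds.revoke("surrogate-bot", "emails")
# pds.read("surrogate-bot", "emails")        # would now raise PermissionError
```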