Human-robot interactions take step forward with 'emotional' chatbot

Saturday, 06/05/2017, 11:31
An “emotional chatting machine” has been developed by scientists, signalling the approach of an era in which human-robot interactions are seamless and go beyond the purely functional.

The ECM, as it is known for short, was able to produce factually coherent answers whilst also imbuing its conversation with emotions such as happiness, sadness or disgust.

The paper found that 61% of the humans who tested the machine preferred the emotional versions to the neutral chatbot. Similar results have been found in so-called “Wizard of Oz” studies in which a human typing responses masquerades as advanced AI.

Minlie Huang, a computer scientist at Tsinghua University in Beijing and a co-author of the paper, said: “We’re still far away from a machine that can fully understand the user’s emotion. This is just the first attempt at this problem.”

The chatbot signals the approach of an era of sophisticated human-robot interactions – although perhaps not quite as sophisticated (or sinister) as that seen in Ex Machina. Photo: Publicity image

Huang and colleagues started by creating an “emotion-classifying” algorithm that learned to detect emotion from 23,000 posts taken from the Chinese social media site Weibo. The posts had been manually classified by humans as sad, happy and so on.

The emotion classifier was then used to tag millions of social media interactions according to emotional content. This huge dataset served as a training ground for the chatbot to learn both how to answer questions and how to express emotion.
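In outline, that two-stage pipeline resembles the Python sketch below. It is an illustrative reconstruction rather than the researchers’ code: the real system used a neural emotion classifier, whereas this snippet substitutes a simple scikit-learn model, and the function names and label handling are invented for the example.

    # Illustrative sketch of the two-stage labelling pipeline described above
    # (not the authors' code; a simple scikit-learn classifier stands in for
    # the neural emotion classifier used in the actual study).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_emotion_classifier(posts, labels):
        """Stage 1: fit a text classifier on the ~23,000 manually labelled posts.
        Character n-grams are used because Chinese text is not whitespace-tokenised."""
        model = make_pipeline(
            TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
            LogisticRegression(max_iter=1000),
        )
        model.fit(posts, labels)
        return model

    def tag_corpus(classifier, conversations):
        """Stage 2: auto-label a large corpus of (post, reply) pairs, yielding
        (post, reply, emotion) triples on which the chatbot can be trained."""
        for post, reply in conversations:
            emotion = classifier.predict([reply])[0]
            yield post, reply, emotion

The key point is the bootstrapping: a modest amount of hand-labelled data trains the classifier, and the classifier then supplies emotion labels for a corpus far too large to annotate by hand.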

The resulting program could be switched into five possible modes – happy, sad, angry, disgusted, liking – depending on the user’s preference. In one example conversation a user typed in: “Worst day ever. I arrived late because of the traffic.”

In neutral mode, the chatbot droned: “You were late”. Alternative responses were: “Sometimes life just sucks!” (disgust mode), “I am always here to support you” (liking) or “Keep smiling! Things will get better” (happy – or, some might say, annoyingly chipper).
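One way a single model can produce different answers in different modes is to feed the chosen emotion into the response generator as an extra embedding at every step. The PyTorch sketch below illustrates that idea only; it is not the paper’s architecture (the published ECM additionally uses internal and external memory mechanisms), and the label set, layer sizes and class name are assumptions made for the example.

    # Minimal sketch of emotion-conditioned decoding (illustrative, not the ECM).
    import torch
    import torch.nn as nn

    EMOTIONS = ["neutral", "happy", "sad", "angry", "disgusted", "liking"]  # assumed set

    class EmotionConditionedDecoder(nn.Module):
        def __init__(self, vocab_size, word_dim=128, emo_dim=32, hidden_dim=256):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim)
            self.emo_emb = nn.Embedding(len(EMOTIONS), emo_dim)
            # The GRU sees each word embedding concatenated with the emotion
            # embedding, so the same input can be decoded differently per mode.
            self.gru = nn.GRU(word_dim + emo_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, prev_tokens, emotion_id, hidden):
            # prev_tokens: (batch, seq_len) ids of the response generated so far
            # emotion_id:  (batch,) index into EMOTIONS chosen by the user
            # hidden:      (1, batch, hidden_dim) state summarising the user's message
            w = self.word_emb(prev_tokens)
            e = self.emo_emb(emotion_id).unsqueeze(1).expand(-1, w.size(1), -1)
            h, hidden = self.gru(torch.cat([w, e], dim=-1), hidden)
            return self.out(h), hidden  # logits over the vocabulary

Switching the chatbot from “disgust” to “happy” then amounts to changing a single index: the user’s words are unchanged, but the decoder is steered towards a different region of its response space.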

In the future, the team predict the software could also learn the appropriate emotion to express at a given time. “It could be mostly empathic,” said Huang, adding that a challenge would be to avoid the chatbot reinforcing negative feelings such as rage.

Until recently chatbots were widely regarded as a sideshow to more serious attempts at tackling machine intelligence. A chatbot known as Eugene Goostman managed to convince some judges they were talking to a human – but only by posing as a 13-year-old Ukrainian boy with a limited grasp of English. Microsoft’s disastrous chatbot Tay was supposed to learn to chat from Twitter interactions, but was terminated after becoming a genocide-supporting Nazi less than 24 hours after being let loose on the internet.

The latest study shows that chatbots, driven by a machine learning approach, are starting to make significant headway. Sandra Wachter, a computer scientist at the Oxford Internet Institute, said that in future such algorithms are likely to be personalised. “Some of us prefer a tough-love pep talk, others prefer someone to rant with,” she said. “Humans often struggle with appropriate responses because of the complexity of emotions, so building technologies that could decipher accurately our ‘emotional code’ would be very impressive.”

As the stilted computer interactions of today are replaced by something approaching friendly chit-chat, new risks could be encountered.

The Guardian