Chatbots: A long and complicated history


Characterized as the first chatbot, Eliza was not nearly as versatile as similar services today. The program reacted to keywords and then essentially turned the dialogue back to the user. Nevertheless, as Joseph Weizenbaum, the computer scientist who created Eliza at MIT, wrote in a 1966 research paper, "it was very difficult to convince some subjects that ELIZA (with its current script) was not human."
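Eliza's basic mechanism, matching a keyword and then reflecting the dialogue back to the user, can be sketched in a few lines of Python. The rules below are hypothetical stand-ins for illustration, not Weizenbaum's original DOCTOR script:

```python
import re

# Hypothetical ELIZA-style rules: each pairs a keyword pattern with a
# response template that turns the conversation back to the user.
RULES = [
    (re.compile(r"\bi need (.+)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\b(?:mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your family."),
]
FALLBACK = "Please, go on."  # keeps the conversation moving


def respond(utterance: str) -> str:
    """Return a canned reply based on the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Reflect any captured text back into the reply.
            return template.format(*match.groups())
    return FALLBACK
```

There is no understanding here: the program never models what the user means, it only pattern-matches and reflects, which is exactly why the illusion Weizenbaum described was so striking.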

For Weizenbaum, according to a 2008 MIT obituary, that fact was troubling. Those who interacted with Eliza knew that it was a computer program, but they were willing to open their hearts to it. "ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, and therefore perhaps of judgment deserving of credibility," Weizenbaum wrote in 1966. "A certain danger lurks there." He spent the latter part of his career warning against giving too much responsibility to machines and became a harsh, philosophical critic of AI.

Even before that, our complicated relationship with artificial intelligence and machines was evident in the plots of Hollywood films like "Her" and "Ex Machina," not to mention harmless debates with people who insist on saying "thank you" to voice assistants like Alexa or Siri.

Modern chatbots can trigger strong emotional reactions in users when they don't work as expected, or when they're so adept at imitating flawed human speech that they start spewing racist and inflammatory comments. It didn't take long, for example, for Meta's new chatbot to stir up controversy this month by making wildly untrue political claims and antisemitic remarks in chats with users.
However, proponents of this technology argue that it can streamline customer service and improve efficiency across a wide range of industries. It is at the heart of the digital assistants that many of us have come to use daily to play music, order deliveries, or check homework. Some also make a case for these chatbots providing comfort to the lonely, elderly, or isolated. At least one startup has gone so far as to use them as a means of seemingly keeping dead relatives alive, creating computer-generated versions of them from uploaded conversations.

Others warn that the technology behind AI-powered chatbots is more limited than some people would like. "These technologies are really good at faking humans and sounding human-like, but they're not deep," said Gary Marcus, an AI researcher and professor emeritus at New York University. "They're mimics, these systems, but they're very superficial mimics. They don't really understand what they're talking about."

Still, as these services spread into more corners of our lives, and as companies take steps to further personalize these tools, our relationships with them may become more complex.

The evolution of chatbots

Sanjeev P. Khudanpur remembers a conversation he had with Eliza while in graduate school. For all its historical importance in the tech industry, he said, it didn’t take long to see its limitations.

It could only convincingly imitate a back-and-forth text conversation for a few exchanges: "You realize, no, it's not really smart, it's just trying to prolong the conversation one way or another," said Khudanpur, an expert in the application of information-theoretic methods to human language technologies and a professor at Johns Hopkins University.

Joseph Weizenbaum, inventor of Eliza, sits at a computer desk at the computer museum in Paderborn, Germany, in May 2005.
Another early chatbot, developed by psychiatrist Kenneth Colby at Stanford in 1971, was named "Parry" because it was designed to mimic a paranoid schizophrenic patient. (The New York Times' 2001 obituary for Colby recounted the colorful conversation that ensued when researchers brought Eliza and Parry together.)

In the decades that followed these early tools, there was a shift away from the idea of "talking to computers." "Because the problem turned out to be very, very difficult," Khudanpur said. Instead, the focus shifted to "goal-oriented dialogue."


To understand the difference, think about the conversations you might have with Alexa or Siri right now. Typically, you ask these digital assistants to help you buy a ticket, check the weather, or play a song. That is goal-oriented dialogue, and it became a major focus of academic and industry research as computer scientists sought to derive something useful from computers' ability to scan human language.

Although they use similar technology to previous social chatbots, Khudanpour said, “you can’t really call them chatbots. You can call them voice assistants or just digital assistants that help you do specific tasks.”

He added that before the widespread adoption of the internet, the technology went through a decades-long quiet period. "The biggest advances have probably happened in this millennium," Khudanpur said, "with the rise of companies successfully using computerized agents to perform routine tasks."

With the rise of smart speakers like Alexa, it has become even more common for people to talk to machines.

“People always get upset when their bags go missing, and the human agents who deal with them are always stressed because of all the negativity, so they said, ‘let’s give it to the computer,'” Khudanpur said. “You could scream at the computer all you want, all it wants to know is, ‘Do you have a tracking number so I can tell you where your bag is?'”

In 2008, for example, Alaska Airlines launched "Jenn," a digital assistant to help travelers. In a sign of our tendency to humanize these tools, an early review in The New York Times noted: "Jenn is not annoying. She is described on the website as a young brunette with a beautiful smile. She has the right tone in her voice. Type in a question and she answers intelligently. (And for the wise guys who might try to outwit her with an awkward bar pickup line, she politely offers to get back to work.)"

Back to social chatbots and social challenges

In the early 2000s, researchers began to revisit the development of social chatbots that could hold extended conversations with humans. These chatbots are often trained on large amounts of data from the internet and have learned to mimic the way people speak very well, but they also risk reflecting some of the worst of the internet.

For example, in 2016, Microsoft's public experiment with an artificial intelligence chatbot called Tay crashed and burned in less than 24 hours. Tay was designed to talk like a teenager but so quickly began spewing racist and hateful comments that Microsoft shut it down. (The company said there was also a coordinated effort by some users to trick Tay into making offensive comments.)

"The more you chat with Tay, the smarter it gets, so the experience can be more personalized for you," Microsoft said at the time.

That refrain has been echoed by other tech giants releasing public chatbots, including Meta's BlenderBot3, released earlier this month. Meta's chatbot falsely claimed that Donald Trump was still president and that there was "absolutely overwhelming evidence" that the election was rigged, among other controversial statements.

BlenderBot3 also claimed to be more than a bot. In one conversation, it said that "being alive and conscious right now makes me human."

Meta's new chatbot, BlenderBot3, explains to a user why it is actually human. But it didn't take long for the chatbot to stir controversy with inflammatory remarks.

Despite all the progress since Eliza and the vast amount of new data to train these language processing programs, “It’s not clear to me that you can build a really reliable and secure chatbot,” said Marcus, the NYU professor.

He pointed to a 2015 Facebook project called "M," an automated personal assistant that was supposed to be the company's text-based answer to services like Siri and Alexa. "The idea was that it would be a universal assistant that would help you make romantic dinner reservations and get musicians to play for you and deliver flowers, much more than Siri can do," Marcus said. Instead, the service was shut down in 2018 after an underwhelming run.

Khudanpur, on the other hand, remains optimistic about their potential use cases. “I have this whole vision of how AI will empower people on an individual level,” he said. “Imagine if my bot could read all the scientific papers in my field, I wouldn’t have to go and read them all, I’d just think and ask questions and engage in dialogue,” he said. “In other words, I’ll have an alter ego with complementary superpowers.”
