Chatbots are computer programs that can respond to text or voice. They are becoming more advanced and can predict the next word in a sentence. However, we should be cautious about trusting them too much, as they are only predicting words based on what they have learned. Chatbots can be useful for practical information, but they are not human and do not have a mind. The first chatbot was called Eliza. Chatbots may sound human, but they are just machines.

Hello, this is 6 Minute English from BBC Learning English. I'm Neil. And I'm Rob. Now, I'm sure most of us have interacted with a chatbot. These are bits of computer technology that respond to text with text, or respond to your voice. You ask it a question and usually it comes up with an answer.

Yes, it's almost like talking to another human. But of course, it's not. It's just a clever piece of technology that is becoming more sophisticated: more advanced and complex. But could chatbots replace real human interaction altogether? We'll discuss that more in a moment, and find out whether chatbots really think for themselves.

But first, I have a question for you, Rob. The first computer program that allowed some kind of plausible conversation between humans and machines was invented in 1966. But what was it called? Was it a) Alexa, b) Eliza or c) Parry?

Well, it's not Alexa, that's too new, so I'll guess c) Parry.

I'll reveal the answer at the end of the programme. Now, the old chatbots of the 1960s and 70s were quite basic. But more recently, the technology has become able to predict the next word that is likely to be used in a sentence, and it learns words and sentence structures.

It's clever stuff. I've experienced using them when talking to my bank, or when I have problems trying to book a ticket on a website. I no longer phone a human; I speak to a virtual assistant instead.

Probably the most well-known chatbot at the moment is ChatGPT. It is. The claim is that it's able to answer anything you ask it.
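The "predict the next word" idea mentioned in the programme can be sketched in a few lines of Python. This is a toy bigram model, not how ChatGPT actually works: real chatbots use far more sophisticated neural networks, and the tiny training sentence below is an illustrative stand-in, not real data.

```python
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return max(followers, key=followers.get)

# A tiny made-up corpus just to show the mechanism.
corpus = "the cat sat on the mat and the cat chased the dog"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

The point the programme makes is visible even at this scale: the model produces a plausible next word purely from counted patterns, with no understanding behind it.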
This includes writing students' essays. Now, this is something that was discussed on the BBC Radio 4 programme Word of Mouth. Emily M. Bender, Professor of Computational Linguistics at the University of Washington, explained why it's dangerous to always trust what a chatbot is telling us: "We tend to react to grammatical, fluent, coherent-seeming text as authoritative and reliable and valuable, and we need to be on guard against that, because what's coming out of ChatGPT is none of that."

So Professor Bender says that well-written text that is coherent, meaning it's clear, carefully considered and sensible, makes us think what we are reading is reliable and authoritative: respected, accurate and important-sounding.

Yes, chatbots might appear to write in this way, but really they are just predicting one word after another based on what they have learned. We should therefore be on guard: careful and alert about the accuracy of what we are being told.

One concern is that chatbots, a form of artificial intelligence, work a bit like a human brain in the way they can learn and process information. They are able to learn from experience, something called deep learning. A cognitive psychologist and computer scientist called Geoffrey Hinton recently said he feared that chatbots could soon overtake the level of information that a human brain holds.

That's a bit scary, isn't it? But for now, chatbots can be useful for practical information. Sometimes, though, we start to believe they are human and we interact with them in a human-like way. This can make us believe them even more. Professor Emily Bender, speaking on the BBC's Word of Mouth programme, explains why we might feel like that.
"I think what's going on there is that the kinds of answers you get depend on the questions you put in, because it's doing likely next word, likely next word. And so if, as the human interacting with this machine, you start asking it questions like 'how do you feel, chatbot?' and 'what do you think of this?' and 'what are your goals?', you can provoke it to say things that sound like what a sentient entity would say. We are really primed to imagine a mind behind language whenever we encounter language, and so we really have to account for that when we're making decisions about these."

So although a chatbot might sound human, we really just ask it things to get a reaction; we provoke it. And it answers only with words it has learned to use before, not because it has come up with a clever answer.

But it does sound like a sentient entity. Sentient describes a living thing that experiences feelings. As Professor Bender says, we imagine that when something speaks, there is a mind behind it. But, sorry Neil, they are not your friends. They're just machines.

Yes, it's strange, then, that we sometimes give chatbots names: Alexa, Siri. And earlier I asked you what the name was for the first ever chatbot.

And I guessed it was Parry. Was I right?

You guessed wrong, I'm afraid. Parry was an early form of chatbot from 1972, but the correct answer was Eliza. It was considered to be the first chatterbot, as it was called then, and was developed by Joseph Weizenbaum at the Massachusetts Institute of Technology.

Fascinating stuff. OK, now let's recap some of the vocabulary we highlighted in this programme. Starting with sophisticated, which can describe technology that is advanced and complex. Something that is coherent is clear, carefully considered and sensible. Authoritative means respected, accurate and important-sounding. When you are on guard, you must be careful and alert about something: it could be the accuracy of what you see or hear, or just being aware of the dangers around you.
To provoke means to do something that causes a reaction from someone. Sentient describes something that experiences feelings, so it's something that is living.

Once again, our six minutes are up. Goodbye. Bye for now.